Jan 23 06:19:55 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 06:19:55 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 06:19:55 crc restorecon[4695]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 06:19:55 crc restorecon[4695]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc 
restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:55 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc 
restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 
06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 
crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 
06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc 
restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc 
restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 
crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc 
restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc 
restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 06:19:56 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 06:19:57 crc kubenswrapper[4784]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 06:19:57 crc kubenswrapper[4784]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 06:19:57 crc kubenswrapper[4784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 06:19:57 crc kubenswrapper[4784]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 06:19:57 crc kubenswrapper[4784]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 06:19:57 crc kubenswrapper[4784]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.028912 4784 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032478 4784 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032503 4784 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032510 4784 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032516 4784 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032523 4784 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032533 4784 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032542 4784 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032549 4784 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032555 4784 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032562 4784 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032569 4784 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032575 4784 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032581 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032593 4784 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032599 4784 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032604 4784 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032609 4784 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032615 4784 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032621 4784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032626 4784 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032632 4784 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032637 4784 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032642 4784 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032647 4784 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032652 4784 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032657 4784 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032663 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032670 4784 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032677 4784 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032683 4784 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032690 4784 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032696 4784 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032701 4784 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032706 4784 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032712 4784 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032717 4784 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032722 4784 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032728 4784 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032733 4784 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032738 4784 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032743 4784 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032768 4784 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032775 4784 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032781 4784 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032787 4784 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032794 4784 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032799 4784 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032807 4784 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032812 4784 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032819 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032824 4784 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032830 4784 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032835 4784 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032841 4784 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032847 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032853 4784 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032858 4784 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032863 4784 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032870 4784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032875 4784 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032880 4784 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032886 4784 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032891 4784 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032895 4784 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032900 4784 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032906 4784 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032910 4784 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032920 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032926 4784 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032932 4784 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.032939 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033239 4784 flags.go:64] FLAG: --address="0.0.0.0"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033256 4784 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033266 4784 flags.go:64] FLAG: --anonymous-auth="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033273 4784 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033282 4784 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033288 4784 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033295 4784 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033303 4784 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033310 4784 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033318 4784 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033326 4784 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033334 4784 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033341 4784 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033348 4784 flags.go:64] FLAG: --cgroup-root=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033355 4784 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033362 4784 flags.go:64] FLAG: --client-ca-file=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033370 4784 flags.go:64] FLAG: --cloud-config=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033375 4784 flags.go:64] FLAG: --cloud-provider=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033382 4784 flags.go:64] FLAG: --cluster-dns="[]"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033390 4784 flags.go:64] FLAG: --cluster-domain=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033396 4784 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033404 4784 flags.go:64] FLAG: --config-dir=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033410 4784 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033418 4784 flags.go:64] FLAG: --container-log-max-files="5"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033428 4784 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033436 4784 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033443 4784 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033451 4784 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033458 4784 flags.go:64] FLAG: --contention-profiling="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033465 4784 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033472 4784 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033478 4784 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033484 4784 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033492 4784 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033498 4784 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033504 4784 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033510 4784 flags.go:64] FLAG: --enable-load-reader="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033516 4784 flags.go:64] FLAG: --enable-server="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033521 4784 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033530 4784 flags.go:64] FLAG: --event-burst="100"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033539 4784 flags.go:64] FLAG: --event-qps="50"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033547 4784 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033554 4784 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033562 4784 flags.go:64] FLAG: --eviction-hard=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033572 4784 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033578 4784 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033584 4784 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033591 4784 flags.go:64] FLAG: --eviction-soft=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033598 4784 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033605 4784 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033615 4784 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033623 4784 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033630 4784 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033636 4784 flags.go:64] FLAG: --fail-swap-on="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033642 4784 flags.go:64] FLAG: --feature-gates=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033649 4784 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033655 4784 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033661 4784 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033667 4784 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033673 4784 flags.go:64] FLAG: --healthz-port="10248"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033679 4784 flags.go:64] FLAG: --help="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033685 4784 flags.go:64] FLAG: --hostname-override=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033690 4784 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033697 4784 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033704 4784 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033710 4784 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033717 4784 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033724 4784 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033731 4784 flags.go:64] FLAG: --image-service-endpoint=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033738 4784 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033745 4784 flags.go:64] FLAG: --kube-api-burst="100"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033774 4784 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033781 4784 flags.go:64] FLAG: --kube-api-qps="50"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033787 4784 flags.go:64] FLAG: --kube-reserved=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033793 4784 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033799 4784 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033806 4784 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033812 4784 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033817 4784 flags.go:64] FLAG: --lock-file=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033823 4784 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033830 4784 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033836 4784 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033846 4784 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033853 4784 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033859 4784 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033864 4784 flags.go:64] FLAG: --logging-format="text"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033870 4784 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033876 4784 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033882 4784 flags.go:64] FLAG: --manifest-url=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033888 4784 flags.go:64] FLAG: --manifest-url-header=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033896 4784 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033902 4784 flags.go:64] FLAG: --max-open-files="1000000"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033910 4784 flags.go:64] FLAG: --max-pods="110"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033916 4784 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033922 4784 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033928 4784 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033934 4784 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033939 4784 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033946 4784 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033952 4784 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033965 4784 flags.go:64] FLAG: --node-status-max-images="50"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033971 4784 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033976 4784 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033983 4784 flags.go:64] FLAG: --pod-cidr=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.033991 4784 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034002 4784 flags.go:64] FLAG: --pod-manifest-path=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034008 4784 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034015 4784 flags.go:64] FLAG: --pods-per-core="0"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034021 4784 flags.go:64] FLAG: --port="10250"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034027 4784 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034032 4784 flags.go:64] FLAG: --provider-id=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034038 4784 flags.go:64] FLAG: --qos-reserved=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034043 4784 flags.go:64] FLAG: --read-only-port="10255"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034049 4784 flags.go:64] FLAG: --register-node="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034054 4784 flags.go:64] FLAG: --register-schedulable="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034060 4784 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034070 4784 flags.go:64] FLAG: --registry-burst="10"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034075 4784 flags.go:64] FLAG: --registry-qps="5"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034082 4784 flags.go:64] FLAG: --reserved-cpus=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034088 4784 flags.go:64] FLAG: --reserved-memory=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034097 4784 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034105 4784 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034112 4784 flags.go:64] FLAG: --rotate-certificates="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034118 4784 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034125 4784 flags.go:64] FLAG: --runonce="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034130 4784 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034137 4784 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034143 4784 flags.go:64] FLAG: --seccomp-default="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034148 4784 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034155 4784 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034161 4784 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034167 4784 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034173 4784 flags.go:64] FLAG: --storage-driver-password="root"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034179 4784 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034186 4784 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034192 4784 flags.go:64] FLAG: --storage-driver-user="root"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034200 4784 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034207 4784 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034215 4784 flags.go:64] FLAG: --system-cgroups=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034221 4784 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034231 4784 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034237 4784 flags.go:64] FLAG: --tls-cert-file=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034242 4784 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034252 4784 flags.go:64] FLAG: --tls-min-version=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034259 4784 flags.go:64] FLAG: --tls-private-key-file=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034266 4784 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034273 4784 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034280 4784 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034288 4784 flags.go:64] FLAG: --v="2"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034303 4784 flags.go:64] FLAG: --version="false"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034313 4784 flags.go:64] FLAG: --vmodule=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034320 4784 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034326 4784 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034487 4784 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034496 4784 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034503 4784 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034511 4784 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034518 4784 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034525 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034531 4784 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034538 4784 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034545 4784 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034551 4784 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034557 4784 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034564 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034570 4784 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034576 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034582 4784 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034588 4784 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034594 4784 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034600 4784 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034606 4784 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034611 4784 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034616 4784 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034620 4784 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034625 4784 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034632 4784 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034639 4784 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034645 4784 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034651 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034657 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034663 4784 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034668 4784 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034674 4784 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034680 4784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034686 4784 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034692 4784 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034697 4784 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034702 4784 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034708 4784 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034714 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034720 4784 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034725 4784 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034730 4784 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034735 4784 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034741 4784 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034746 4784 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034772 4784 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034780 4784 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034785 4784 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034791 4784 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034797 4784 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034802 4784 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034808 4784 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034814 4784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034819 4784 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034824 4784 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034829 4784 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034833 4784 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034838 4784 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034845 4784 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034850 4784 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034856 4784 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034861 4784 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034866 4784 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034871 4784 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034876 4784 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034881 4784 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034887 4784 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034892 4784 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034898 4784 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034905 4784 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034911 4784 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.034917 4784 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.034933 4784 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.045849 4784 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.045901 4784 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046034 4784 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046057 4784 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046067 4784 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046080 4784 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046095 4784 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046106 4784 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046114 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046123 4784 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046132 4784 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046140 4784 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046152 4784 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046164 4784 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046173 4784 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046182 4784 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046190 4784 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046199 4784 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046208 4784 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046216 4784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046224 4784 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046233 4784 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046242 4784 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046253 4784 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046263 4784 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046272 4784 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046281 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046290 4784 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046299 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046307 4784 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046315 4784 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046324 4784 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046333 4784 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046343 4784 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046352 4784 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046361 4784 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046371 4784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046379 4784 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046390 4784 
feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046400 4784 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046409 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046418 4784 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046427 4784 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046436 4784 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046444 4784 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046452 4784 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046460 4784 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046468 4784 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046476 4784 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046484 4784 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046492 4784 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046499 4784 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046507 4784 feature_gate.go:330] unrecognized 
feature gate: ExternalOIDC Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046515 4784 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046523 4784 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046531 4784 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046539 4784 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046546 4784 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046554 4784 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046562 4784 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046572 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046580 4784 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046588 4784 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046598 4784 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046606 4784 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046614 4784 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046622 4784 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 06:19:57 crc 
kubenswrapper[4784]: W0123 06:19:57.046630 4784 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046637 4784 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046645 4784 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046653 4784 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046661 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046670 4784 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.046683 4784 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046933 4784 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046949 4784 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046957 4784 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046965 4784 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046974 4784 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046982 4784 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046989 4784 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.046997 4784 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047006 4784 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047014 4784 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047022 4784 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047031 4784 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047040 4784 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047047 4784 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047055 4784 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047063 4784 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047071 4784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047079 4784 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047087 4784 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 06:19:57 crc kubenswrapper[4784]: 
W0123 06:19:57.047095 4784 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047103 4784 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047112 4784 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047119 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047129 4784 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047137 4784 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047145 4784 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047153 4784 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047163 4784 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047173 4784 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047182 4784 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047191 4784 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047200 4784 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047208 4784 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047216 4784 feature_gate.go:330] unrecognized feature gate: Example Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047227 4784 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047236 4784 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047244 4784 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047253 4784 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047288 4784 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047297 4784 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047307 4784 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047316 4784 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047325 4784 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047333 4784 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047341 4784 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047350 4784 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047358 4784 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047369 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047378 4784 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047386 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047393 4784 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047405 4784 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047415 4784 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047424 4784 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047432 4784 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047440 4784 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047450 4784 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047460 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047469 4784 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047477 4784 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047485 4784 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047493 4784 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047501 4784 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047509 4784 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047517 4784 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047524 4784 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 06:19:57 crc 
kubenswrapper[4784]: W0123 06:19:57.047532 4784 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047540 4784 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047548 4784 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047558 4784 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.047569 4784 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.047582 4784 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.048162 4784 server.go:940] "Client rotation is on, will bootstrap in background" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.052434 4784 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.052566 4784 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.053459 4784 server.go:997] "Starting client certificate rotation" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.053507 4784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.053801 4784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 21:45:40.824643399 +0000 UTC Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.053940 4784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.091790 4784 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.093805 4784 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.093987 4784 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.102241 4784 log.go:25] "Validated CRI v1 runtime API" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.115739 4784 log.go:25] "Validated CRI v1 image API" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.117724 4784 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.120561 4784 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-06-14-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.120618 4784 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.142920 4784 manager.go:217] Machine: {Timestamp:2026-01-23 06:19:57.141631506 +0000 UTC m=+0.374139490 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0719c803-6211-4272-a78a-6e99726b5e37 BootID:6bf1eead-6d5f-443a-9fe0-75bfca2eafd3 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4c:7d:04 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4c:7d:04 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3a:76:54 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2c:ee:15 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b6:2a:7f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:10:0e:7d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:52:c9:6a:e7:d9:d1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:d3:14:e5:af:39 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.143211 4784 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.143529 4784 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.144499 4784 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.144808 4784 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.144870 4784 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.145184 4784 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.145196 4784 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.145371 4784 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.145412 4784 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.145791 4784 state_mem.go:36] "Initialized new in-memory state store" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.145930 4784 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.146556 4784 kubelet.go:418] "Attempting to sync node with API server" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.146580 4784 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.146608 4784 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.146625 4784 kubelet.go:324] "Adding apiserver pod source" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.146639 4784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.148734 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.148744 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.148877 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.149101 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.149264 4784 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.149771 4784 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.150739 4784 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.151394 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.151574 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.151654 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.151720 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.151809 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.151880 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.151941 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.152009 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.152092 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.152159 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.152230 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.152290 4784 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.152726 4784 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.153729 4784 server.go:1280] "Started kubelet" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.153744 4784 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:57 crc systemd[1]: Started Kubernetes Kubelet. Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.170974 4784 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.154106 4784 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.172531 4784 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.217:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d47d24b02fee7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 06:19:57.153664743 +0000 UTC m=+0.386172717,LastTimestamp:2026-01-23 06:19:57.153664743 +0000 UTC m=+0.386172717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.173031 4784 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.173696 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is 
enabled Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.173804 4784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.173895 4784 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.173918 4784 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.173870 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:07:30.308572933 +0000 UTC Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.174002 4784 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.174286 4784 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.174556 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="200ms" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.175927 4784 factory.go:55] Registering systemd factory Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.176089 4784 factory.go:221] Registration of the systemd container factory successfully Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.176692 4784 factory.go:153] Registering CRI-O factory Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.176870 4784 factory.go:221] Registration of the crio container factory successfully Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.177015 4784 server.go:460] "Adding debug handlers to kubelet server" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.177174 4784 
factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.177333 4784 factory.go:103] Registering Raw factory Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.177465 4784 manager.go:1196] Started watching for new ooms in manager Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.177189 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.177674 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.180474 4784 manager.go:319] Starting recovery of all containers Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.206656 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.206883 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 23 06:19:57 crc 
kubenswrapper[4784]: I0123 06:19:57.206913 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.206939 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.206953 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.206965 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.206978 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.206993 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207010 4784 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207025 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207041 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207053 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207088 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207107 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207122 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207136 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207149 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207162 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207183 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207199 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207215 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207230 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207243 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207258 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207275 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207297 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207337 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207358 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207392 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207407 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207421 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207434 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207447 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207462 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207490 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207537 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207556 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207575 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207625 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207644 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207665 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207680 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207739 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207788 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207809 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" 
seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207825 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207845 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207865 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207885 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207906 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207931 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207952 4784 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207978 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.207997 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208016 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208035 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208057 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208076 4784 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208093 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208113 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208133 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208153 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208171 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208189 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208205 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208224 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208241 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208257 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208274 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208292 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208308 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208324 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208343 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208363 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208382 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208405 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208429 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208447 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208474 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208493 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208513 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208532 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208549 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208566 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208583 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208604 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208622 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208641 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208662 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208682 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208703 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208724 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208740 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208786 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208805 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208824 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208843 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208861 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208879 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208897 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.208914 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.210818 4784 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.210854 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.210871 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.210886 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.210916 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.210939 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.210963 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.210984 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211004 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211023 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211039 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211056 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211080 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211104 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211133 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211152 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211173 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211194 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211211 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211232 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211251 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211267 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211288 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211312 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211330 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211350 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211365 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211385 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211400 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211412 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211426 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211443 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211458 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211473 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211487 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211500 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211540 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211556 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211574 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211589 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211603 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211616 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211631 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211644 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211657 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211673 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211692 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211714 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211731 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211766 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211783 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211799 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211816 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211837 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211859 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211877 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211896 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211914 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211936 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211955 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211974 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.211995 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212014 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212034 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212055 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212076 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212095 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212120 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212143 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212162 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212197 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212217 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212240 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212262 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212286 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212318 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212338 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212358 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212379 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212402 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212425 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212446 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212467 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212487 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212507 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config"
seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212524 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212542 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212559 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212576 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212593 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212649 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 
06:19:57.212667 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212682 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212698 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212715 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212737 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212783 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212805 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212824 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212840 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212855 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212870 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212888 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212909 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212926 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212941 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212954 4784 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212967 4784 reconstruct.go:97] "Volume reconstruction finished" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.212977 4784 reconciler.go:26] "Reconciler: start to sync state" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.229506 4784 manager.go:324] Recovery completed Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.244035 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.247742 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.247798 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.247810 4784 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.248855 4784 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.248874 4784 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.248922 4784 state_mem.go:36] "Initialized new in-memory state store" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.249218 4784 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.252298 4784 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.252368 4784 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.252405 4784 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.252459 4784 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.255036 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.255108 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:57 crc 
kubenswrapper[4784]: I0123 06:19:57.259556 4784 policy_none.go:49] "None policy: Start" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.261494 4784 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.261540 4784 state_mem.go:35] "Initializing new in-memory state store" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.274118 4784 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.332314 4784 manager.go:334] "Starting Device Plugin manager" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.332420 4784 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.332436 4784 server.go:79] "Starting device plugin registration server" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.332971 4784 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.332989 4784 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.333301 4784 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.333508 4784 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.333525 4784 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.344638 4784 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.352605 4784 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.352798 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.354031 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.354093 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.354110 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.354364 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.354632 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.354725 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.355651 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.355716 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.355814 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.355869 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.355902 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.355921 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.356092 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.356118 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.356169 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.356925 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.356966 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.356980 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.357190 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.357184 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.357288 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.357308 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.357315 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.357367 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.358651 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.358682 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.358691 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.358812 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.358837 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.358851 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.358864 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.359075 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.359120 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.359536 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.359560 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.359570 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.359791 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.359821 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.360217 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.360268 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.360283 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.360680 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.360722 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 
06:19:57.360736 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.375950 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="400ms" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415268 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415335 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415394 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415419 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415439 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415457 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415478 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415499 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415516 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 
06:19:57.415540 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415582 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415601 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415651 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.415855 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.416002 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.433136 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.436396 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.436448 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.436458 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.436482 4784 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.436980 4784 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.217:6443: connect: connection refused" node="crc" Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.489626 4784 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.217:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d47d24b02fee7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 
06:19:57.153664743 +0000 UTC m=+0.386172717,LastTimestamp:2026-01-23 06:19:57.153664743 +0000 UTC m=+0.386172717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.517990 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518066 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518097 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518117 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518142 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518164 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518182 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518202 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518223 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518213 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518306 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518298 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518349 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518389 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518245 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518433 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518385 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518435 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518518 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518522 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518571 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc 
kubenswrapper[4784]: I0123 06:19:57.518602 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518619 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518630 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518823 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518843 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518635 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518871 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518909 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.518660 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.637056 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.638549 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.638597 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.638610 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.638630 4784 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 
06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.639134 4784 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.217:6443: connect: connection refused" node="crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.687119 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.705207 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.712655 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.718881 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.722904 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b8ec1c8522ab6ff1f6a369fbaac2d88fff7f58d1726b51e2a9aa21290a76de0e WatchSource:0}: Error finding container b8ec1c8522ab6ff1f6a369fbaac2d88fff7f58d1726b51e2a9aa21290a76de0e: Status 404 returned error can't find the container with id b8ec1c8522ab6ff1f6a369fbaac2d88fff7f58d1726b51e2a9aa21290a76de0e Jan 23 06:19:57 crc kubenswrapper[4784]: I0123 06:19:57.737951 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:19:57 crc kubenswrapper[4784]: W0123 06:19:57.741375 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e99b49dc9421517a779cb54199e75e533a74baff4f5d4d528fa147626d260eaa WatchSource:0}: Error finding container e99b49dc9421517a779cb54199e75e533a74baff4f5d4d528fa147626d260eaa: Status 404 returned error can't find the container with id e99b49dc9421517a779cb54199e75e533a74baff4f5d4d528fa147626d260eaa Jan 23 06:19:57 crc kubenswrapper[4784]: E0123 06:19:57.777335 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="800ms" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.039529 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.042181 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.042258 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.042275 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.042318 4784 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:19:58 crc kubenswrapper[4784]: E0123 06:19:58.042971 4784 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.217:6443: connect: 
connection refused" node="crc" Jan 23 06:19:58 crc kubenswrapper[4784]: W0123 06:19:58.137852 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:58 crc kubenswrapper[4784]: E0123 06:19:58.138018 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.154827 4784 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.175414 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 15:58:48.774165218 +0000 UTC Jan 23 06:19:58 crc kubenswrapper[4784]: W0123 06:19:58.181353 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:58 crc kubenswrapper[4784]: E0123 06:19:58.181488 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:58 crc kubenswrapper[4784]: W0123 06:19:58.249696 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:58 crc kubenswrapper[4784]: E0123 06:19:58.249960 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.259238 4784 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf" exitCode=0 Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.259336 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.259442 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7c46605d69fd8d6c5a89c6ae0111ca8f428978758597bb7fb4f0054570006151"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.259563 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.261106 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:58 crc 
kubenswrapper[4784]: I0123 06:19:58.261185 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.261207 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.261467 4784 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52" exitCode=0 Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.261540 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.261567 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9dedffeed6247c4575ca0af4f1ce907536b2b4961b0c7904a75a0bd867ba30d8"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.261630 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.263671 4784 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc" exitCode=0 Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.263800 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc"} Jan 23 06:19:58 crc 
kubenswrapper[4784]: I0123 06:19:58.263882 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e99b49dc9421517a779cb54199e75e533a74baff4f5d4d528fa147626d260eaa"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.264052 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.265790 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.265824 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8ec1c8522ab6ff1f6a369fbaac2d88fff7f58d1726b51e2a9aa21290a76de0e"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.265880 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.265917 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.265930 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.266063 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.266102 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.266116 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.268597 4784 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37" exitCode=0 Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.268676 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.268726 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"28f88e70c46cd185a4fc3cb6583bd7388f2dc4894478341b144dbae11a9f8873"} Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.268939 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.271104 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.271284 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.271509 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.275081 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.279500 4784 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.279529 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.279540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:58 crc kubenswrapper[4784]: W0123 06:19:58.463648 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:58 crc kubenswrapper[4784]: E0123 06:19:58.463770 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:58 crc kubenswrapper[4784]: E0123 06:19:58.578732 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="1.6s" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.844090 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.845437 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.845491 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.845508 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:58 crc kubenswrapper[4784]: I0123 06:19:58.845543 4784 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:19:58 crc kubenswrapper[4784]: E0123 06:19:58.846187 4784 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.217:6443: connect: connection refused" node="crc" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.155425 4784 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.217:6443: connect: connection refused Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.175934 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:50:33.177145003 +0000 UTC Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.180107 4784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 06:19:59 crc kubenswrapper[4784]: E0123 06:19:59.181316 4784 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.217:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.289787 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c"} Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.289850 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070"} Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.293063 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119"} Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.293096 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048"} Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.306184 4784 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d" exitCode=0 Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.306257 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d"} Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.306408 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.307565 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.307594 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.307605 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.311917 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3d8d335a55d46d0af562baebd8a838e5306dc05b5307fc63cf8857eace36ff28"} Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.312015 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.313170 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.313190 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.313199 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.315375 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c"} Jan 23 06:19:59 crc kubenswrapper[4784]: I0123 06:19:59.315398 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df"} Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.176590 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:14:23.226599733 +0000 UTC Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.322186 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625"} Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.322280 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.323633 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.323708 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.323722 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.327149 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc"} Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.327225 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.328306 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.328353 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.328389 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.331926 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.331988 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64"} Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.332056 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc"} Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.332083 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4"} Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.332839 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.332866 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.332880 4784 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.334410 4784 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628" exitCode=0 Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.334477 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628"} Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.334646 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.335536 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.335566 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.335578 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.446931 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.448970 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.449038 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.449055 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:00 crc kubenswrapper[4784]: 
I0123 06:20:00.449093 4784 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:20:00 crc kubenswrapper[4784]: I0123 06:20:00.608113 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.177217 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:11:44.800466474 +0000 UTC Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.347654 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.348217 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7"} Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.348295 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6"} Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.348319 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc"} Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.348115 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.348230 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.348257 4784 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.350524 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.350570 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.350589 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.350543 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.350670 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.350688 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.351374 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.351452 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:01 crc kubenswrapper[4784]: I0123 06:20:01.351471 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.178403 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:46:01.695620574 +0000 UTC Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.358574 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50"} Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.358641 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.358680 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e"} Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.358623 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.360257 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.360296 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.360311 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.360260 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.360417 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.360444 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.482014 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.482258 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.484067 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.484129 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:02 crc kubenswrapper[4784]: I0123 06:20:02.484148 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.178601 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:29:23.628195665 +0000 UTC Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.217369 4784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.261898 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.262144 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.263737 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.263803 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.263820 4784 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.361346 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.362411 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.362473 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:03 crc kubenswrapper[4784]: I0123 06:20:03.362487 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.178993 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:53:09.333950535 +0000 UTC Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.296993 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.297268 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.298869 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.298929 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.298943 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.545878 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.546192 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.547946 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.548014 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:04 crc kubenswrapper[4784]: I0123 06:20:04.548040 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:05 crc kubenswrapper[4784]: I0123 06:20:05.179924 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:23:06.180669703 +0000 UTC Jan 23 06:20:05 crc kubenswrapper[4784]: I0123 06:20:05.683327 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 06:20:05 crc kubenswrapper[4784]: I0123 06:20:05.683650 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:05 crc kubenswrapper[4784]: I0123 06:20:05.685785 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:05 crc kubenswrapper[4784]: I0123 06:20:05.685841 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:05 crc kubenswrapper[4784]: I0123 06:20:05.685858 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:06 crc kubenswrapper[4784]: I0123 06:20:06.180732 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-11-23 00:08:19.52869702 +0000 UTC Jan 23 06:20:06 crc kubenswrapper[4784]: I0123 06:20:06.626383 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:06 crc kubenswrapper[4784]: I0123 06:20:06.626704 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:06 crc kubenswrapper[4784]: I0123 06:20:06.628359 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:06 crc kubenswrapper[4784]: I0123 06:20:06.628410 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:06 crc kubenswrapper[4784]: I0123 06:20:06.628428 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:07 crc kubenswrapper[4784]: I0123 06:20:07.182000 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:18:37.553560451 +0000 UTC Jan 23 06:20:07 crc kubenswrapper[4784]: E0123 06:20:07.344926 4784 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.126714 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.127062 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.129813 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 
06:20:08.129895 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.129916 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.135302 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.183225 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:40:34.656563486 +0000 UTC Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.339407 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.339873 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.341984 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.342045 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.342057 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.377122 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.378786 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.378833 
4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.378851 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.382888 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:08 crc kubenswrapper[4784]: I0123 06:20:08.801125 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:09 crc kubenswrapper[4784]: I0123 06:20:09.183545 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:48:07.735196338 +0000 UTC Jan 23 06:20:09 crc kubenswrapper[4784]: I0123 06:20:09.379079 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:09 crc kubenswrapper[4784]: I0123 06:20:09.380384 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:09 crc kubenswrapper[4784]: I0123 06:20:09.380438 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:09 crc kubenswrapper[4784]: I0123 06:20:09.380451 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:09 crc kubenswrapper[4784]: I0123 06:20:09.627223 4784 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Jan 23 06:20:09 crc kubenswrapper[4784]: I0123 06:20:09.627363 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:20:10 crc kubenswrapper[4784]: I0123 06:20:10.155574 4784 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 06:20:10 crc kubenswrapper[4784]: E0123 06:20:10.180950 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 23 06:20:10 crc kubenswrapper[4784]: I0123 06:20:10.184296 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:21:17.320811107 +0000 UTC Jan 23 06:20:10 crc kubenswrapper[4784]: W0123 06:20:10.214033 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 06:20:10 crc kubenswrapper[4784]: I0123 06:20:10.214147 4784 trace.go:236] Trace[380686670]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 06:20:00.212) (total time: 10001ms): Jan 23 06:20:10 crc kubenswrapper[4784]: Trace[380686670]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:20:10.214) Jan 23 06:20:10 crc kubenswrapper[4784]: Trace[380686670]: [10.001553067s] [10.001553067s] END Jan 23 06:20:10 crc kubenswrapper[4784]: E0123 06:20:10.214177 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 06:20:10 crc kubenswrapper[4784]: I0123 06:20:10.386263 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:10 crc kubenswrapper[4784]: I0123 06:20:10.387484 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:10 crc kubenswrapper[4784]: I0123 06:20:10.387531 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:10 crc kubenswrapper[4784]: I0123 06:20:10.387541 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:10 crc kubenswrapper[4784]: E0123 06:20:10.450302 4784 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 23 06:20:10 crc kubenswrapper[4784]: W0123 06:20:10.706953 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 06:20:10 crc kubenswrapper[4784]: I0123 06:20:10.707064 4784 trace.go:236] Trace[521481287]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 06:20:00.704) (total time: 10002ms): Jan 23 06:20:10 crc kubenswrapper[4784]: Trace[521481287]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:20:10.706) Jan 23 06:20:10 crc kubenswrapper[4784]: Trace[521481287]: [10.002043476s] [10.002043476s] END Jan 23 06:20:10 crc kubenswrapper[4784]: E0123 06:20:10.707087 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 06:20:11 crc kubenswrapper[4784]: I0123 06:20:11.185216 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:18:54.986399725 +0000 UTC Jan 23 06:20:11 crc kubenswrapper[4784]: W0123 06:20:11.233690 4784 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 06:20:11 crc kubenswrapper[4784]: I0123 06:20:11.233840 4784 trace.go:236] Trace[529770557]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 06:20:01.230) (total time: 10003ms): Jan 23 06:20:11 crc kubenswrapper[4784]: Trace[529770557]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (06:20:11.233) Jan 23 06:20:11 crc kubenswrapper[4784]: Trace[529770557]: [10.003391683s] [10.003391683s] END Jan 23 06:20:11 crc kubenswrapper[4784]: E0123 
06:20:11.233874 4784 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 06:20:11 crc kubenswrapper[4784]: I0123 06:20:11.475560 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 06:20:11 crc kubenswrapper[4784]: I0123 06:20:11.475658 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 06:20:11 crc kubenswrapper[4784]: I0123 06:20:11.483152 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 06:20:11 crc kubenswrapper[4784]: I0123 06:20:11.483235 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 06:20:12 crc kubenswrapper[4784]: I0123 06:20:12.185557 4784 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:01:21.856394436 +0000 UTC Jan 23 06:20:13 crc kubenswrapper[4784]: I0123 06:20:13.186350 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:57:07.772099192 +0000 UTC Jan 23 06:20:13 crc kubenswrapper[4784]: I0123 06:20:13.650737 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:13 crc kubenswrapper[4784]: I0123 06:20:13.652617 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:13 crc kubenswrapper[4784]: I0123 06:20:13.652667 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:13 crc kubenswrapper[4784]: I0123 06:20:13.652676 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:13 crc kubenswrapper[4784]: I0123 06:20:13.652709 4784 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:20:13 crc kubenswrapper[4784]: E0123 06:20:13.658868 4784 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 06:20:14 crc kubenswrapper[4784]: I0123 06:20:14.187386 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:28:22.368640165 +0000 UTC Jan 23 06:20:14 crc kubenswrapper[4784]: I0123 06:20:14.552006 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:20:14 crc kubenswrapper[4784]: I0123 06:20:14.552204 4784 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:14 crc kubenswrapper[4784]: I0123 06:20:14.553419 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:14 crc kubenswrapper[4784]: I0123 06:20:14.553476 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:14 crc kubenswrapper[4784]: I0123 06:20:14.553489 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:14 crc kubenswrapper[4784]: I0123 06:20:14.558942 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:20:14 crc kubenswrapper[4784]: I0123 06:20:14.820129 4784 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 06:20:15 crc kubenswrapper[4784]: I0123 06:20:15.188244 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:08:31.035857691 +0000 UTC Jan 23 06:20:15 crc kubenswrapper[4784]: I0123 06:20:15.403651 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:20:15 crc kubenswrapper[4784]: I0123 06:20:15.403771 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:15 crc kubenswrapper[4784]: I0123 06:20:15.405389 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:15 crc kubenswrapper[4784]: I0123 06:20:15.405525 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:15 crc kubenswrapper[4784]: I0123 06:20:15.406041 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 06:20:15 crc kubenswrapper[4784]: I0123 06:20:15.474507 4784 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.188971 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:21:51.938719376 +0000 UTC Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.466005 4784 trace.go:236] Trace[534070945]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 06:20:01.538) (total time: 14927ms): Jan 23 06:20:16 crc kubenswrapper[4784]: Trace[534070945]: ---"Objects listed" error: 14927ms (06:20:16.465) Jan 23 06:20:16 crc kubenswrapper[4784]: Trace[534070945]: [14.927400878s] [14.927400878s] END Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.466042 4784 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.466396 4784 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.482259 4784 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.500946 4784 csr.go:261] certificate signing request csr-wsg5x is approved, waiting to be issued Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.507072 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54434->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.507137 4784 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54434->192.168.126.11:17697: read: connection reset by peer" Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.507231 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54436->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.507356 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54436->192.168.126.11:17697: read: connection reset by peer" Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.507573 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.507650 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.520033 4784 csr.go:257] certificate signing request csr-wsg5x 
is issued Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.794419 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.798868 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:16 crc kubenswrapper[4784]: I0123 06:20:16.813684 4784 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.054923 4784 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.055355 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 38.102.83.217:46674->38.102.83.217:6443: use of closed network connection" interval="6.4s" Jan 23 06:20:17 crc kubenswrapper[4784]: W0123 06:20:17.055499 4784 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 06:20:17 crc kubenswrapper[4784]: W0123 06:20:17.055508 4784 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.055460 4784 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 
38.102.83.217:46674->38.102.83.217:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d47d26eb42f8e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 06:19:57.75247963 +0000 UTC m=+0.984987614,LastTimestamp:2026-01-23 06:19:57.75247963 +0000 UTC m=+0.984987614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.158834 4784 apiserver.go:52] "Watching apiserver" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.162535 4784 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.163022 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.163458 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.163602 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.163777 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.163864 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.163925 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.164294 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.164360 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.164374 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.164488 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.167169 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.171646 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.171707 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.171668 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.172199 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.172257 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.172417 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.172463 4784 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.172465 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.175248 4784 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.189265 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:15:32.048664541 +0000 UTC Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.201709 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.222304 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.241354 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.259731 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.273826 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.273890 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.273924 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.273949 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.273975 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274000 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274026 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274063 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274093 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274116 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274139 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274166 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274339 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274387 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274426 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274451 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274475 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274504 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274527 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274556 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274588 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274615 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274638 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274665 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274689 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274731 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274773 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274796 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274820 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 
06:20:17.274845 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274868 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274893 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274921 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274948 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274975 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.274999 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275023 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275017 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275048 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275124 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275154 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275181 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275215 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275252 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275326 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275355 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275379 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275404 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275428 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275455 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275479 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275539 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.275983 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276223 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276234 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276401 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276452 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276637 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276654 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276684 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276665 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276780 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276442 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276897 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.276961 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.277083 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:17.777059788 +0000 UTC m=+21.009567772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.277347 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.280685 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.281176 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.281901 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.281976 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282132 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282243 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282210 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282291 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282439 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282446 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282532 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282610 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282697 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282696 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282870 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.282900 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.283254 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.283676 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.283833 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.283928 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284040 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284105 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284366 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284388 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284427 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284478 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284539 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284585 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284623 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284642 4784 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284656 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284693 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284722 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284724 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284771 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284805 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284832 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284859 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284888 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284916 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284938 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284963 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.284992 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285018 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285060 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285084 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285108 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285162 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285205 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285384 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285409 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285429 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285454 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285478 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285512 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285550 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285575 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285596 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285615 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285643 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 
06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285667 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285691 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285712 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285733 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285776 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285798 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285817 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285838 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285859 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285877 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285901 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 
06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.285980 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.286857 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.286925 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.286954 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.286979 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287013 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287036 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287062 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287091 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287111 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287134 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287157 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287178 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287199 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287224 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287246 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287268 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" 
(UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287290 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287316 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287342 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287361 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287407 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287430 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287452 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287472 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287496 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287517 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287537 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287557 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287580 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287604 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287631 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287654 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 
06:20:17.287677 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287698 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287719 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287740 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287782 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287804 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287826 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287845 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287866 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287885 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287908 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287929 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287951 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287974 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.287998 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288018 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288039 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288060 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288082 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288102 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288124 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288145 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288167 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288187 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288206 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288227 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288246 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288265 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 
06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288286 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288308 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288327 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288347 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288368 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288387 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288407 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288426 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288447 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288474 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288500 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 
06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288521 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288543 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288565 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288585 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288610 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288633 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288656 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288680 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288701 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288722 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.288772 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 
06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293529 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293616 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293664 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293708 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293775 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293815 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293857 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293895 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293959 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294000 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294035 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294070 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294111 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294151 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294194 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294372 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294415 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294590 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294638 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294679 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294727 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294794 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294828 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.294977 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295094 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295134 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295168 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295198 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295198 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295248 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295284 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295326 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295816 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295858 4784 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295876 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295931 4784 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295947 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295970 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.295986 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.296046 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.296068 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.296083 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.296098 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.296958 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.297158 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.297424 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.297681 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.298071 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.293339 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.296113 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.302410 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.302429 4784 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.302445 4784 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.302461 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.302483 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.302498 4784 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.302514 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.303379 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.303765 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.303817 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.304072 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.304266 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.304528 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.304723 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.304794 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.304542 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.305275 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.305654 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.305719 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.305717 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.306060 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.306214 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.306350 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.306832 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.307240 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.307638 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.308113 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.308247 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.308294 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.305296 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.302529 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.308801 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.308875 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.308954 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309079 4784 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309163 4784 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309264 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309351 4784 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309421 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309506 4784 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309605 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc 
kubenswrapper[4784]: I0123 06:20:17.309673 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309732 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309822 4784 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309903 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309986 4784 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310081 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310149 4784 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310218 4784 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310280 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310345 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310412 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310472 4784 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.304640 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309485 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.309839 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310252 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310736 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.310585 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.311883 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.312874 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.315082 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.311505 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.315348 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.315564 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.315744 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.316042 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.316741 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.317104 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.318395 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.318606 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.318738 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.319516 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.319555 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.319630 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.319880 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.320005 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.319889 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.320044 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.320383 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.320483 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.320594 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.320647 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.322349 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.322375 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.322594 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.322861 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.322923 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.323133 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.323458 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.323514 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.323817 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.323981 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.324325 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.324385 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.324686 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.316995 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.325493 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.325646 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.325795 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.326387 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.326344 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.326608 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.326574 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.325827 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.327410 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.328515 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.328515 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.327440 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.327467 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.328899 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331953 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.329011 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331984 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.329088 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331998 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.329842 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.329854 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.329977 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.330153 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.330526 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.330655 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.330709 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.330712 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.330961 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.330972 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331032 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331100 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331326 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331340 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331550 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331635 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331731 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.331825 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.332196 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.332030 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.332205 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.332244 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.332378 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.332423 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.332644 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.333049 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.334016 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.334035 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.334405 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.334411 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.334458 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.334968 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.335077 4784 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.335324 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:17.835277086 +0000 UTC m=+21.067785060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.335824 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.335888 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.336399 4784 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.330373 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.336511 4784 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.336609 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:17.836583417 +0000 UTC m=+21.069091591 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.337109 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.337191 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.339089 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:17.839051936 +0000 UTC m=+21.071559910 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.339259 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.339677 4784 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.338515 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.340009 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.343039 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.350532 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.350610 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.350632 4784 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.350726 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:17.850702554 +0000 UTC m=+21.083210548 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.352902 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.356581 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.358729 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.359072 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: 
"1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.360229 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.360458 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.360893 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.361149 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.361413 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.361579 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.361840 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.363194 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.363510 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.363994 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.365598 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.367268 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.368004 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.368218 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.368372 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.369726 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.370972 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.372282 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.372607 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.372906 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.373147 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.373291 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.374564 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.377008 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.384017 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.393292 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.400041 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.403457 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.405090 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.405164 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.410949 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.411452 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.411601 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.411615 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412052 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412199 4784 reconciler_common.go:293] "Volume detached for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412238 4784 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412255 4784 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412267 4784 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412277 4784 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412289 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412301 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412312 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412325 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412338 4784 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412350 4784 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412363 4784 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412376 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412389 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412399 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc 
kubenswrapper[4784]: I0123 06:20:17.412410 4784 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412421 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412431 4784 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412442 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412453 4784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412463 4784 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412474 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412484 4784 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412496 4784 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412505 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412516 4784 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412527 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412538 4784 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412548 4784 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412557 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc 
kubenswrapper[4784]: I0123 06:20:17.412567 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412577 4784 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412587 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412598 4784 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412609 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412619 4784 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412630 4784 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412641 
4784 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412654 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412664 4784 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412675 4784 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412685 4784 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412697 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412707 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412716 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412725 4784 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412735 4784 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412744 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412772 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412783 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412794 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412806 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node 
\"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412816 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412825 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412836 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412848 4784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412857 4784 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412867 4784 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412878 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412888 4784 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412900 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412909 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412919 4784 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412929 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412939 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412949 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412959 4784 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412968 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412977 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412987 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.412996 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413006 4784 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413019 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413029 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc 
kubenswrapper[4784]: I0123 06:20:17.413039 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413048 4784 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413057 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413067 4784 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413078 4784 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413089 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413099 4784 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413126 4784 reconciler_common.go:293] "Volume detached 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413136 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413147 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413157 4784 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413166 4784 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413176 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413186 4784 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413197 4784 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413205 4784 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413215 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413224 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413232 4784 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413241 4784 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413253 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413262 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node 
\"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413272 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413281 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413292 4784 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413301 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413310 4784 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413319 4784 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413345 4784 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413359 4784 reconciler_common.go:293] "Volume 
detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413369 4784 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413379 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413390 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413401 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413410 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413420 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413429 4784 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413438 4784 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413449 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413460 4784 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413470 4784 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413479 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413490 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413500 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node 
\"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413511 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413521 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413531 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413544 4784 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413553 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413562 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413571 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc 
kubenswrapper[4784]: I0123 06:20:17.413581 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413591 4784 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413601 4784 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413611 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413621 4784 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413632 4784 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413643 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 
06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413653 4784 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413662 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413672 4784 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413681 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413695 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413705 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413716 4784 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413726 4784 reconciler_common.go:293] 
"Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413736 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413746 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413769 4784 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413779 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413789 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413798 4784 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413809 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413819 4784 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413829 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413839 4784 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413850 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.413860 4784 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.417040 4784 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64" exitCode=255 Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.417918 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64"} Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.425785 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.429513 4784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.438772 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.439538 4784 scope.go:117] "RemoveContainer" containerID="ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.439665 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.453313 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.480824 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.482356 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.492824 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.500550 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.500960 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:20:17 crc kubenswrapper[4784]: W0123 06:20:17.509989 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-5fe39235f79436ecd4ec0a97670ab274f473c9d8459072c7ebce77f589d5b972 WatchSource:0}: Error finding container 5fe39235f79436ecd4ec0a97670ab274f473c9d8459072c7ebce77f589d5b972: Status 404 returned error can't find the container with id 5fe39235f79436ecd4ec0a97670ab274f473c9d8459072c7ebce77f589d5b972 Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.514240 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.521840 4784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 06:15:16 +0000 UTC, rotation deadline is 2026-12-01 21:35:19.888104744 +0000 UTC Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.521944 4784 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7503h15m2.366165248s for next certificate rotation Jan 23 06:20:17 crc 
kubenswrapper[4784]: I0123 06:20:17.534427 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.563976 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.610610 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.632064 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af1
53c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.657193 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 
only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.683743 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.706207 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.817395 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.817568 4784 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:18.817538554 +0000 UTC m=+22.050046528 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.918932 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.918991 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.919214 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:17 crc kubenswrapper[4784]: I0123 06:20:17.919241 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919355 4784 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919427 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:18.919405663 +0000 UTC m=+22.151913637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919541 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919554 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919569 4784 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919597 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:18.919588857 +0000 UTC m=+22.152096831 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919655 4784 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919682 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:18.919672469 +0000 UTC m=+22.152180443 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919734 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919745 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919771 4784 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:17 crc kubenswrapper[4784]: E0123 06:20:17.919808 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:18.919786742 +0000 UTC m=+22.152294716 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.190188 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:57:15.342859584 +0000 UTC Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.422523 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047"} Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.422586 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5fe39235f79436ecd4ec0a97670ab274f473c9d8459072c7ebce77f589d5b972"} Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.424914 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.429416 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b"} Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.429840 4784 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.430519 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.432106 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62"} Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.432140 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875"} Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.432153 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2780f28ee1e9849e5385c526da4715b9c9c38a5cf29cc48fe3fdb1f9d9623237"} Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.433533 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8f519174c734902a334cbe459720aca5396f87aa3f6eb31304b177a3bc9d8530"} Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.466319 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.480359 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.563381 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.586315 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.614935 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers 
with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.622558 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-r7dpd"] Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.622995 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.626847 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.626902 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.627340 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.627713 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.630622 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.655352 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.676827 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.697698 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.719440 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 
06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.727019 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-proxy-tls\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.727096 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-mcd-auth-proxy-config\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.727129 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-rootfs\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.727152 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpb56\" (UniqueName: 
\"kubernetes.io/projected/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-kube-api-access-hpb56\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.735377 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.750821 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.765872 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.778457 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.794192 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.810627 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.827605 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.827742 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-23 06:20:20.82772263 +0000 UTC m=+24.060230604 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.827888 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-rootfs\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.827922 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpb56\" (UniqueName: \"kubernetes.io/projected/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-kube-api-access-hpb56\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.828100 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-rootfs\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.828284 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-proxy-tls\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.828360 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-mcd-auth-proxy-config\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.829075 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-mcd-auth-proxy-config\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.829810 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.842140 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-proxy-tls\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 
06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.845360 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpb56\" (UniqueName: \"kubernetes.io/projected/ce19e3ac-f68d-40a1-b01a-740a09dc59e1-kube-api-access-hpb56\") pod \"machine-config-daemon-r7dpd\" (UID: \"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\") " pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.848916 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.881326 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.910635 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.925007 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.929445 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.929519 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.929545 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.929597 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.929690 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.929715 4784 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.929858 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-23 06:20:20.929833634 +0000 UTC m=+24.162341608 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.929729 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.929932 4784 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.929711 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.930015 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:20.929989388 +0000 UTC m=+24.162497532 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.930031 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.930050 4784 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.929733 4784 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.930088 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:20.93007967 +0000 UTC m=+24.162587644 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:18 crc kubenswrapper[4784]: E0123 06:20:18.930161 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:20.930147472 +0000 UTC m=+24.162655446 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:18 crc kubenswrapper[4784]: I0123 06:20:18.944385 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.009414 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-f9zpg"] Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.009936 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-f9zpg" Jan 23 06:20:19 crc kubenswrapper[4784]: W0123 06:20:19.011794 4784 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 23 06:20:19 crc kubenswrapper[4784]: E0123 06:20:19.011851 4784 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.011963 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6ts88"] Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.012460 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.012533 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.012773 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.015655 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.015725 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.016022 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9652h"] Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.015787 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.015916 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.015960 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.017217 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.018557 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8cjm4"] Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.019013 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.020441 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.020467 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.020742 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.020787 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.021728 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.022333 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.022990 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.023806 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.024289 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.031719 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec6438ba-1338-40e2-9746-8cd62c5d0ce4-hosts-file\") pod \"node-resolver-f9zpg\" (UID: 
\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\") " pod="openshift-dns/node-resolver-f9zpg" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.031808 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-os-release\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.031838 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwdc\" (UniqueName: \"kubernetes.io/projected/ec6438ba-1338-40e2-9746-8cd62c5d0ce4-kube-api-access-bwwdc\") pod \"node-resolver-f9zpg\" (UID: \"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\") " pod="openshift-dns/node-resolver-f9zpg" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.031870 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwvzk\" (UniqueName: \"kubernetes.io/projected/86ce0358-1c71-4b17-80b8-0c930b5356de-kube-api-access-fwvzk\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.031930 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.031985 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/86ce0358-1c71-4b17-80b8-0c930b5356de-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.032050 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-cnibin\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.032093 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.032117 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86ce0358-1c71-4b17-80b8-0c930b5356de-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.040237 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.056653 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.068129 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.087696 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.109927 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.124654 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.132983 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec6438ba-1338-40e2-9746-8cd62c5d0ce4-hosts-file\") pod \"node-resolver-f9zpg\" (UID: \"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\") " pod="openshift-dns/node-resolver-f9zpg" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133055 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-os-release\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133100 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-env-overrides\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133131 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-etc-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133152 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-ovn\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133174 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/76b58650-2600-48a5-b11e-2ed4503cc6b2-cni-binary-copy\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133195 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-var-lib-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133234 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-ovn-kubernetes\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133260 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-os-release\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133297 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/86ce0358-1c71-4b17-80b8-0c930b5356de-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133329 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-systemd-units\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133406 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5278\" (UniqueName: \"kubernetes.io/projected/73ef0442-94bc-46f2-a551-15b59d1a5cf0-kube-api-access-j5278\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133541 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-daemon-config\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 
06:20:19.133660 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-slash\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133671 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-os-release\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133767 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-log-socket\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133697 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec6438ba-1338-40e2-9746-8cd62c5d0ce4-hosts-file\") pod \"node-resolver-f9zpg\" (UID: \"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\") " pod="openshift-dns/node-resolver-f9zpg" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133936 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrvj\" (UniqueName: \"kubernetes.io/projected/76b58650-2600-48a5-b11e-2ed4503cc6b2-kube-api-access-nhrvj\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.133999 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-socket-dir-parent\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.134171 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86ce0358-1c71-4b17-80b8-0c930b5356de-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.134857 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-kubelet\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.134905 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-multus-certs\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.134941 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-script-lib\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.134952 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/86ce0358-1c71-4b17-80b8-0c930b5356de-cni-binary-copy\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.134972 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-system-cni-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.134263 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/86ce0358-1c71-4b17-80b8-0c930b5356de-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135032 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwwdc\" (UniqueName: \"kubernetes.io/projected/ec6438ba-1338-40e2-9746-8cd62c5d0ce4-kube-api-access-bwwdc\") pod \"node-resolver-f9zpg\" (UID: \"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\") " pod="openshift-dns/node-resolver-f9zpg" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135104 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwvzk\" (UniqueName: \"kubernetes.io/projected/86ce0358-1c71-4b17-80b8-0c930b5356de-kube-api-access-fwvzk\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 
06:20:19.135138 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovn-node-metrics-cert\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135170 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-cnibin\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135204 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135234 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-netd\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135262 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-cni-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135292 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-k8s-cni-cncf-io\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135296 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-system-cni-dir\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135320 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-hostroot\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135389 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-cnibin\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135461 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135508 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-bin\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135528 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-cni-multus\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135454 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-cnibin\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135606 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-kubelet\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135629 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-config\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135666 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-conf-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135691 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135712 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-netns\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135770 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-node-log\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135799 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc 
kubenswrapper[4784]: I0123 06:20:19.135817 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-cni-bin\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135857 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-etc-kubernetes\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135879 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-systemd\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.135896 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-netns\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.136210 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/86ce0358-1c71-4b17-80b8-0c930b5356de-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 
06:20:19.140231 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.155635 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwvzk\" (UniqueName: \"kubernetes.io/projected/86ce0358-1c71-4b17-80b8-0c930b5356de-kube-api-access-fwvzk\") pod \"multus-additional-cni-plugins-6ts88\" (UID: \"86ce0358-1c71-4b17-80b8-0c930b5356de\") " pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.162083 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.175124 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.188932 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.190853 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:42:22.262228721 +0000 UTC Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.218013 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.234188 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236388 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-cnibin\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236445 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-netd\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236468 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-cni-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236491 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-k8s-cni-cncf-io\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236511 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-hostroot\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 
06:20:19.236531 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236537 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-netd\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236553 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-bin\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236576 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-cni-multus\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236570 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-cnibin\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236634 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-cni-multus\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236652 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-kubelet\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236678 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-config\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236686 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236699 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-conf-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236707 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-k8s-cni-cncf-io\") pod \"multus-8cjm4\" 
(UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236721 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-bin\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236727 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-netns\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236768 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-node-log\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236773 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-kubelet\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236796 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236666 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-hostroot\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236828 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-cni-bin\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236834 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-node-log\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236848 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-etc-kubernetes\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236869 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-netns\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236807 4784 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-conf-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236789 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-cni-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236873 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-systemd\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236924 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-etc-kubernetes\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236899 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236943 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-netns\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236899 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-systemd\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236969 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-env-overrides\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236983 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-netns\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236994 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-ovn\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237018 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/76b58650-2600-48a5-b11e-2ed4503cc6b2-cni-binary-copy\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " 
pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237042 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-etc-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237060 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-var-lib-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237089 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-ovn-kubernetes\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237106 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-os-release\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237125 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5278\" (UniqueName: \"kubernetes.io/projected/73ef0442-94bc-46f2-a551-15b59d1a5cf0-kube-api-access-j5278\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc 
kubenswrapper[4784]: I0123 06:20:19.237149 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-daemon-config\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237181 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-systemd-units\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237203 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrvj\" (UniqueName: \"kubernetes.io/projected/76b58650-2600-48a5-b11e-2ed4503cc6b2-kube-api-access-nhrvj\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237222 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-slash\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237241 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-log-socket\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237260 4784 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-socket-dir-parent\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237285 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-multus-certs\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237304 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-kubelet\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237324 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-script-lib\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237344 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-system-cni-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237375 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovn-node-metrics-cert\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237531 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-env-overrides\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237595 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-socket-dir-parent\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237641 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-systemd-units\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237741 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-config\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237827 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-kubelet\") pod \"multus-8cjm4\" (UID: 
\"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.237863 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-run-multus-certs\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238055 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-slash\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238122 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-log-socket\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238153 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-etc-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238183 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-ovn\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.236923 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-host-var-lib-cni-bin\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238301 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/76b58650-2600-48a5-b11e-2ed4503cc6b2-multus-daemon-config\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238412 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-ovn-kubernetes\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238481 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-system-cni-dir\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238481 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76b58650-2600-48a5-b11e-2ed4503cc6b2-os-release\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238414 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-var-lib-openvswitch\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.238841 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/76b58650-2600-48a5-b11e-2ed4503cc6b2-cni-binary-copy\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.239146 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-script-lib\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.242220 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovn-node-metrics-cert\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.253611 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.253691 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.253710 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:19 crc kubenswrapper[4784]: E0123 06:20:19.254098 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:19 crc kubenswrapper[4784]: E0123 06:20:19.254200 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:19 crc kubenswrapper[4784]: E0123 06:20:19.254375 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.258535 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.259165 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.260656 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.261378 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.262476 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.263033 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.263598 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.264615 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.265393 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.267412 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.267969 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.268431 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5278\" (UniqueName: \"kubernetes.io/projected/73ef0442-94bc-46f2-a551-15b59d1a5cf0-kube-api-access-j5278\") pod \"ovnkube-node-9652h\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.269371 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.269879 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.270910 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.271867 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.272618 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.273736 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.274316 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.275040 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.276509 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.277482 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.278627 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.279861 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.280851 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.281522 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.282189 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.282902 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.283448 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.285152 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.285957 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.288364 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrvj\" (UniqueName: \"kubernetes.io/projected/76b58650-2600-48a5-b11e-2ed4503cc6b2-kube-api-access-nhrvj\") pod \"multus-8cjm4\" (UID: \"76b58650-2600-48a5-b11e-2ed4503cc6b2\") " pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.289107 4784 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.289260 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.291236 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.294805 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.295244 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.296739 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 
06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.297239 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.297604 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.298251 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.299241 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.299930 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.300730 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 
06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.301369 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.304094 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.311976 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.312666 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.313859 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.314744 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.315918 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.316442 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 
06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.318024 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.318526 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.319144 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.328457 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.329001 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.344973 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6ts88" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.345635 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.352348 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8cjm4" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.359165 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: W0123 06:20:19.378776 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ce0358_1c71_4b17_80b8_0c930b5356de.slice/crio-2ea38bdaeb177efd702b238460dd4b7d7773283f6f7143f9cd10510368cd3292 WatchSource:0}: Error finding container 2ea38bdaeb177efd702b238460dd4b7d7773283f6f7143f9cd10510368cd3292: Status 404 returned error can't find the container with id 2ea38bdaeb177efd702b238460dd4b7d7773283f6f7143f9cd10510368cd3292 Jan 23 06:20:19 crc kubenswrapper[4784]: W0123 06:20:19.392609 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76b58650_2600_48a5_b11e_2ed4503cc6b2.slice/crio-458ebdc1dfce0dc1481207426c2556ca27ce15876803ab549452bc58bf2cf4dc WatchSource:0}: Error finding container 458ebdc1dfce0dc1481207426c2556ca27ce15876803ab549452bc58bf2cf4dc: Status 404 returned error can't find the container with id 458ebdc1dfce0dc1481207426c2556ca27ce15876803ab549452bc58bf2cf4dc Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 
06:20:19.398254 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.418574 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.439612 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.446847 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"b274940f6ba2de4b146b58f369eb7cdc4db634d2d13d25729dcc30755c556f8e"} Jan 23 
06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.450092 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda"} Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.450136 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b"} Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.450148 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"be676e7af433a7a47a556bf944ef4a930aead837a2f4cc7b9f6d25bbf7224d70"} Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.451618 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8cjm4" event={"ID":"76b58650-2600-48a5-b11e-2ed4503cc6b2","Type":"ContainerStarted","Data":"458ebdc1dfce0dc1481207426c2556ca27ce15876803ab549452bc58bf2cf4dc"} Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.453118 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerStarted","Data":"2ea38bdaeb177efd702b238460dd4b7d7773283f6f7143f9cd10510368cd3292"} Jan 23 06:20:19 crc kubenswrapper[4784]: E0123 06:20:19.464385 4784 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.470449 4784 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.491313 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.507959 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.531681 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.546964 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.561877 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.578914 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.592911 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.617537 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.633655 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.657921 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.674353 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.693483 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.707117 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.734062 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.753659 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.766199 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.782405 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.801117 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.816112 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.832881 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.833830 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.840386 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwwdc\" (UniqueName: \"kubernetes.io/projected/ec6438ba-1338-40e2-9746-8cd62c5d0ce4-kube-api-access-bwwdc\") pod \"node-resolver-f9zpg\" (UID: \"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\") " pod="openshift-dns/node-resolver-f9zpg" 
Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.849093 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:19Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:19 crc kubenswrapper[4784]: I0123 06:20:19.926043 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-f9zpg" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.059386 4784 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.061510 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.061536 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.061545 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.061640 4784 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.069879 4784 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.072424 4784 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.075660 4784 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.075729 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.075744 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.075782 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.075801 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.101004 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.104427 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.104484 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.104498 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.104518 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.104532 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.119585 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.123669 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.123727 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.123741 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.123806 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.123829 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.136386 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.140100 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.140132 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.140142 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.140158 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.140168 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.154738 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.160169 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.160234 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.160248 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.160270 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.160285 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.175397 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.175541 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.178052 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.178109 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.178125 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.178142 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.178155 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.191847 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:14:28.031552366 +0000 UTC Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.282854 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.282893 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.282906 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.282927 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.282950 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.387540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.387977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.387991 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.388009 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.388024 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.409535 4784 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.458240 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8cjm4" event={"ID":"76b58650-2600-48a5-b11e-2ed4503cc6b2","Type":"ContainerStarted","Data":"373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.460434 4784 generic.go:334] "Generic (PLEG): container finished" podID="86ce0358-1c71-4b17-80b8-0c930b5356de" containerID="ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63" exitCode=0 Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.460500 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerDied","Data":"ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.462106 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e" exitCode=0 Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.462199 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.463860 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f9zpg" event={"ID":"ec6438ba-1338-40e2-9746-8cd62c5d0ce4","Type":"ContainerStarted","Data":"693b86a211d9034bfc713388cb378d36a07ee62bcdb7d3c18e63ef40c20f2728"} Jan 23 06:20:20 crc 
kubenswrapper[4784]: I0123 06:20:20.494073 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.494133 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.494144 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.494159 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.494171 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.505504 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.547602 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.585315 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.599790 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.599832 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.599844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.599862 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.599875 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.612244 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.627613 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.643786 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.664781 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.693764 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.703730 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.703809 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.703827 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 
crc kubenswrapper[4784]: I0123 06:20:20.703848 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.703863 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.709216 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.723563 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.741044 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.758222 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.772900 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.807738 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.807832 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.807847 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.807874 4784 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.807735 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.807890 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.825700 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d3
2d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.858583 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2
6702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0
dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.859802 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:20 crc 
kubenswrapper[4784]: E0123 06:20:20.860117 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:24.860089956 +0000 UTC m=+28.092597930 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.880619 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.894579 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.911953 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.914002 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.914053 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.914066 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.914085 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.914100 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:20Z","lastTransitionTime":"2026-01-23T06:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.939404 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.958085 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.960546 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.960607 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.960641 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.960662 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.960773 4784 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.960826 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:24.960810127 +0000 UTC m=+28.193318101 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.960921 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.960941 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.960955 4784 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.960988 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:24.960977791 +0000 UTC m=+28.193485765 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.961045 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.961058 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.961067 4784 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.961095 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:24.961087164 +0000 UTC m=+28.193595138 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.961168 4784 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:20 crc kubenswrapper[4784]: E0123 06:20:20.961197 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:24.961188557 +0000 UTC m=+28.193696531 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:20 crc kubenswrapper[4784]: I0123 06:20:20.972642 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.004713 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.010797 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9bs27"] Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.011187 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.013270 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.013583 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.014164 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.014393 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.025280 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.025328 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.025320 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.025348 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc 
kubenswrapper[4784]: I0123 06:20:21.025368 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.025379 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.047305 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.061172 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nnbq\" (UniqueName: \"kubernetes.io/projected/294147c4-bce0-4cd5-99bf-d6d63b068c6f-kube-api-access-7nnbq\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc 
kubenswrapper[4784]: I0123 06:20:21.061253 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/294147c4-bce0-4cd5-99bf-d6d63b068c6f-serviceca\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.061384 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/294147c4-bce0-4cd5-99bf-d6d63b068c6f-host\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.061662 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659
bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:
18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.078651 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.094649 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.127995 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.128042 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.128052 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.128068 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.128078 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.162202 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/294147c4-bce0-4cd5-99bf-d6d63b068c6f-serviceca\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.162285 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/294147c4-bce0-4cd5-99bf-d6d63b068c6f-host\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.162323 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nnbq\" (UniqueName: \"kubernetes.io/projected/294147c4-bce0-4cd5-99bf-d6d63b068c6f-kube-api-access-7nnbq\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.162449 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/294147c4-bce0-4cd5-99bf-d6d63b068c6f-host\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.163869 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/294147c4-bce0-4cd5-99bf-d6d63b068c6f-serviceca\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.183778 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nnbq\" (UniqueName: \"kubernetes.io/projected/294147c4-bce0-4cd5-99bf-d6d63b068c6f-kube-api-access-7nnbq\") pod \"node-ca-9bs27\" (UID: \"294147c4-bce0-4cd5-99bf-d6d63b068c6f\") " pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.192569 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:34:57.115504011 +0000 UTC Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.223723 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.229974 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.230001 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.230010 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.230027 4784 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.230037 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.238962 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.256836 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:21 crc kubenswrapper[4784]: E0123 06:20:21.257468 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.256923 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:21 crc kubenswrapper[4784]: E0123 06:20:21.257574 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.256878 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:21 crc kubenswrapper[4784]: E0123 06:20:21.257657 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.265816 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.299022 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.317408 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942
108fb5f74d8ccf61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.331101 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9bs27" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.333326 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.333390 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.333404 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.333425 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.333438 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.342626 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: W0123 06:20:21.346304 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod294147c4_bce0_4cd5_99bf_d6d63b068c6f.slice/crio-501dfa5bee756703af1d08e411043db13fa1f3fde000a38a05d372cec4347d04 WatchSource:0}: Error finding container 501dfa5bee756703af1d08e411043db13fa1f3fde000a38a05d372cec4347d04: Status 404 returned error can't find the container with id 501dfa5bee756703af1d08e411043db13fa1f3fde000a38a05d372cec4347d04 Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.360420 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.377455 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.405308 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.420105 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.434856 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.438922 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.438956 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.438972 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc 
kubenswrapper[4784]: I0123 06:20:21.438994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.439009 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.453919 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.469286 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.474918 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.474968 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.474982 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.474995 4784 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.477507 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f9zpg" event={"ID":"ec6438ba-1338-40e2-9746-8cd62c5d0ce4","Type":"ContainerStarted","Data":"6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.481113 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9bs27" event={"ID":"294147c4-bce0-4cd5-99bf-d6d63b068c6f","Type":"ContainerStarted","Data":"501dfa5bee756703af1d08e411043db13fa1f3fde000a38a05d372cec4347d04"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.482482 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.484239 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerStarted","Data":"13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.486301 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.497653 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.511320 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.524796 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.540258 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.543084 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.543130 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.543145 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.543167 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.543181 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.553490 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.567077 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.577994 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.593125 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.607236 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.630559 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.647179 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.647380 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.647472 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.647561 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.647649 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.653743 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.689541 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06
:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.723927 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.750009 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.750062 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.750075 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.750093 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.750105 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.772925 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.810560 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.847604 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:21Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.852789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.852849 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.852865 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc 
kubenswrapper[4784]: I0123 06:20:21.852886 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.852901 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.955343 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.955383 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.955392 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.955406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:21 crc kubenswrapper[4784]: I0123 06:20:21.955417 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:21Z","lastTransitionTime":"2026-01-23T06:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.058173 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.058236 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.058250 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.058270 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.058284 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.179173 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.179228 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.179239 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.179260 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.179272 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.192942 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:54:41.172980514 +0000 UTC Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.281708 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.281800 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.281813 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.281835 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.281852 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.384470 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.384504 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.384513 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.384527 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.384539 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.488875 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.489439 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.489453 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.489473 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.489488 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.497989 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.498334 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.499372 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9bs27" event={"ID":"294147c4-bce0-4cd5-99bf-d6d63b068c6f","Type":"ContainerStarted","Data":"8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.501529 4784 generic.go:334] "Generic (PLEG): container finished" podID="86ce0358-1c71-4b17-80b8-0c930b5356de" containerID="13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701" exitCode=0 Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.502329 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerDied","Data":"13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.515419 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.531543 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.548651 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.567391 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.589675 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.597184 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.597240 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.597256 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.597276 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 
06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.597289 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.615029 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.632113 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.650321 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.666955 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.680301 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.693654 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.700707 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.700764 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.700783 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.700802 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.700815 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.716286 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.730883 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.745517 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.760911 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.776887 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.792479 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.804641 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.804714 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.804734 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.804798 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.804821 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.806899 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.828620 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.845717 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.861517 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.877318 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.892045 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.908879 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.909048 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.909110 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.909126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.909146 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.909161 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:22Z","lastTransitionTime":"2026-01-23T06:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.924097 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.939500 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.955951 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:22 crc kubenswrapper[4784]: I0123 06:20:22.978911 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.011265 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.012728 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.012821 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.012842 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.012870 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.012895 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.048347 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.115818 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.115902 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.115918 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.115944 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.115958 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.193926 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:42:56.630090345 +0000 UTC Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.219460 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.219508 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.219520 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.219539 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.219556 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.253010 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.253083 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.253143 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:23 crc kubenswrapper[4784]: E0123 06:20:23.253258 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:23 crc kubenswrapper[4784]: E0123 06:20:23.253564 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:23 crc kubenswrapper[4784]: E0123 06:20:23.253455 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.323364 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.323414 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.323425 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.323443 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.323454 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.426695 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.426780 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.426795 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.426814 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.426828 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.508991 4784 generic.go:334] "Generic (PLEG): container finished" podID="86ce0358-1c71-4b17-80b8-0c930b5356de" containerID="06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e" exitCode=0 Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.509096 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerDied","Data":"06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.529873 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.529948 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.529971 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.530000 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.530019 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.530679 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.551524 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.572483 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.587820 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.602353 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.617235 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.633169 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.633225 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.633237 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc 
kubenswrapper[4784]: I0123 06:20:23.633258 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.633271 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.637202 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba
7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.655230 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.677846 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.698807 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.714142 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.731950 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.735574 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.735610 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.735620 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.735635 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.735646 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.752131 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.764560 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.791338 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.838070 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.838113 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.838123 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.838141 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.838152 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.941719 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.941811 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.941830 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.941854 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:23 crc kubenswrapper[4784]: I0123 06:20:23.941873 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:23Z","lastTransitionTime":"2026-01-23T06:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.046158 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.046440 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.046465 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.046499 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.046523 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.150108 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.150161 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.150177 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.150194 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.150205 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.194982 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:01:30.447558553 +0000 UTC Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.253857 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.253916 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.253931 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.253974 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.253988 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.358115 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.358193 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.358218 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.358260 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.358288 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.461852 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.461908 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.461922 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.461940 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.461954 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.525099 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.527850 4784 generic.go:334] "Generic (PLEG): container finished" podID="86ce0358-1c71-4b17-80b8-0c930b5356de" containerID="45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513" exitCode=0 Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.527932 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerDied","Data":"45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.545530 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.562663 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.565019 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.565049 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.565059 4784 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.565075 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.565085 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.577762 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3
b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.594890 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.616075 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.641088 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.658279 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.668696 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.668807 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.668822 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.668842 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.668856 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.676229 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.693920 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.710284 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.724659 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.748027 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.766339 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.771551 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.771596 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.771608 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 
06:20:24.771627 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.771644 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.781261 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.799503 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.874712 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.874847 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.874863 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.874890 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.874905 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.914479 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:24 crc kubenswrapper[4784]: E0123 06:20:24.914732 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:32.914705788 +0000 UTC m=+36.147213782 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.977671 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.977721 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.977732 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.977779 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:24 crc kubenswrapper[4784]: I0123 06:20:24.977793 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:24Z","lastTransitionTime":"2026-01-23T06:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.015735 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.015867 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.015958 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.016001 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016002 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016056 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016077 4784 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016116 4784 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016173 4784 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016232 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016314 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016337 4784 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:25 crc 
kubenswrapper[4784]: E0123 06:20:25.016174 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:33.016143497 +0000 UTC m=+36.248651661 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016430 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:33.016415373 +0000 UTC m=+36.248923507 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016472 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:33.016439544 +0000 UTC m=+36.248947518 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.016503 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:33.016493205 +0000 UTC m=+36.249001409 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.081667 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.081734 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.081782 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.081804 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.081822 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.185721 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.185824 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.185837 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.185873 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.185886 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.196078 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:01:35.119791828 +0000 UTC Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.253249 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.253299 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.253388 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.253405 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.253556 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:25 crc kubenswrapper[4784]: E0123 06:20:25.253712 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.323683 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.324143 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.324249 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.324360 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.324455 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.427547 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.427609 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.427624 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.427648 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.427664 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.531207 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.531248 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.531258 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.531275 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.531287 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.536032 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerStarted","Data":"ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.555734 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.574267 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.593013 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwv
zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.608242 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.625381 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.634524 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.634580 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.634591 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.634616 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.634629 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.641007 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.657696 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.678504 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe7
54906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.693076 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.706397 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.720862 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.738206 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.738254 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.738264 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.738287 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.738311 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.740494 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.754197 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.766308 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.779114 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:25Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.841368 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.841407 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc 
kubenswrapper[4784]: I0123 06:20:25.841417 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.841431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.841446 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.943803 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.943848 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.943860 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.943877 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:25 crc kubenswrapper[4784]: I0123 06:20:25.943890 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:25Z","lastTransitionTime":"2026-01-23T06:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.048158 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.048204 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.048216 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.048232 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.048243 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.151415 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.151489 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.151505 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.151529 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.151545 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.196857 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:49:25.962544712 +0000 UTC Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.255053 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.255146 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.255238 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.255274 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.255297 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.359421 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.359479 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.359492 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.359532 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.359547 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.463700 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.463820 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.463843 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.463876 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.463904 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.566875 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.566935 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.566953 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.566976 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.566994 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.678094 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.678187 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.678209 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.678240 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.678262 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.782308 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.782357 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.782370 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.782390 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.782404 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.885882 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.885935 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.885946 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.885966 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.885979 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.988486 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.988522 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.988531 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.988547 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:26 crc kubenswrapper[4784]: I0123 06:20:26.988557 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:26Z","lastTransitionTime":"2026-01-23T06:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.091315 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.091354 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.091363 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.091376 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.091386 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.194464 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.194524 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.194538 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.194559 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.194575 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.197727 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:17:31.307885356 +0000 UTC Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.253657 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.254089 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.254039 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:27 crc kubenswrapper[4784]: E0123 06:20:27.254232 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:27 crc kubenswrapper[4784]: E0123 06:20:27.254391 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:27 crc kubenswrapper[4784]: E0123 06:20:27.254486 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.276947 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.291261 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.297236 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.297489 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.297500 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.297519 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.297532 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.310941 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z 
is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.322637 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.349948 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.368828 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.383533 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.399558 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.400075 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.400206 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.400477 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.400550 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.400632 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.417193 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.434657 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.448109 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.469087 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwv
zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.485315 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.503738 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.503818 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.503833 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.503851 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.503864 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.506612 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b0
84652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.525043 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.547483 4784 generic.go:334] "Generic (PLEG): container finished" podID="86ce0358-1c71-4b17-80b8-0c930b5356de" containerID="ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496" exitCode=0 Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.547512 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerDied","Data":"ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.552305 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437"} Jan 23 06:20:27 crc 
kubenswrapper[4784]: I0123 06:20:27.552623 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.575165 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.597408 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.606056 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.606116 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.606133 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.606154 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.606169 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.606866 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.612676 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.630157 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.642317 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.662858 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06
:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.679595 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.696319 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.697677 4784 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.709892 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.709951 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.709964 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.709988 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.710003 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.713121 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.728263 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.742794 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.762918 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.778857 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.790464 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.806326 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.813929 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.813986 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.813999 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.814021 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.814035 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.821461 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.834735 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.851566 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.867044 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.882912 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.899108 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.957175 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.957228 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.957242 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:27 crc 
kubenswrapper[4784]: I0123 06:20:27.957262 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.957275 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:27Z","lastTransitionTime":"2026-01-23T06:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.960338 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba
7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:27 crc kubenswrapper[4784]: I0123 06:20:27.980090 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.003529 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06
:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.022842 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.049897 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.060490 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.060524 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.060534 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.060554 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.060566 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.071946 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.088562 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.105160 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.126329 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.163199 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.163252 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.163264 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.163280 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.163293 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.198602 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:57:56.881516416 +0000 UTC Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.266746 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.266874 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.266892 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.266917 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.266937 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.370126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.370199 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.370223 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.370299 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.370332 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.474044 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.474097 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.474111 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.474131 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.474146 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.561571 4784 generic.go:334] "Generic (PLEG): container finished" podID="86ce0358-1c71-4b17-80b8-0c930b5356de" containerID="26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a" exitCode=0 Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.561636 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerDied","Data":"26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.561776 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.562425 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.585905 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.585947 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.585961 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.585982 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.585994 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.586851 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.588828 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.609905 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.632926 4784 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.654140 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.669531 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.691187 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.691221 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.691232 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc 
kubenswrapper[4784]: I0123 06:20:28.691245 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.691256 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.698125 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.714124 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.728344 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.746056 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.760616 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.781522 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f
8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-
23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.793653 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.793713 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.793725 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.793746 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.793772 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.799947 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.814647 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.826476 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.848311 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.862719 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.888498 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.898774 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.899162 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.899310 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.899447 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.899591 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:28Z","lastTransitionTime":"2026-01-23T06:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.908575 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.926723 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.944459 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.968827 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:28 crc kubenswrapper[4784]: I0123 06:20:28.985967 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:28Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.002552 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.004152 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.004209 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.004222 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc 
kubenswrapper[4784]: I0123 06:20:29.004241 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.004254 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.020079 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.035438 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.052876 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.074473 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.096446 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.110709 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.110796 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.110809 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.110831 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.110846 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.118149 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.139124 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06
:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.199270 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:33:41.286660195 +0000 UTC Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.214464 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.214511 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.214523 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.214543 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.214558 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.253125 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.253176 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.253224 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:29 crc kubenswrapper[4784]: E0123 06:20:29.253281 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:29 crc kubenswrapper[4784]: E0123 06:20:29.253388 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:29 crc kubenswrapper[4784]: E0123 06:20:29.253605 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.317076 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.317152 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.317163 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.317181 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.317193 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.421090 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.421148 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.421160 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.421183 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.421199 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.524080 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.524143 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.524163 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.524187 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.524203 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.566587 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/0.log" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.570047 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437" exitCode=1 Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.570141 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.571092 4784 scope.go:117] "RemoveContainer" containerID="ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.574029 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" event={"ID":"86ce0358-1c71-4b17-80b8-0c930b5356de","Type":"ContainerStarted","Data":"e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.587372 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.607567 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.623718 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.626746 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.626791 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.626810 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc 
kubenswrapper[4784]: I0123 06:20:29.626836 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.626850 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.644826 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.661313 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.688146 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06
:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.710348 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.729249 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.732174 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.732236 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.732252 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.732275 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.732292 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.750778 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.768500 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.783722 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.807631 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"message\\\":\\\"3 06:20:29.349168 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:29.349141 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:29.349188 6046 handler.go:208] Removed 
*v1.Namespace event handler 5\\\\nI0123 06:20:29.349224 6046 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349245 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:29.349294 6046 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349499 6046 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.349656 6046 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350072 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350335 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350552 6046 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fb
d73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.822984 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.835332 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.835387 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.835397 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 
06:20:29.835417 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.835429 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.837712 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.852544 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.870194 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.887550 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.901092 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.930071 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"message\\\":\\\"3 06:20:29.349168 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:29.349141 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:29.349188 6046 handler.go:208] Removed 
*v1.Namespace event handler 5\\\\nI0123 06:20:29.349224 6046 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349245 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:29.349294 6046 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349499 6046 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.349656 6046 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350072 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350335 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350552 6046 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fb
d73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.938244 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.938297 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.938312 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.938334 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.938351 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:29Z","lastTransitionTime":"2026-01-23T06:20:29Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.950046 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.967912 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:29 crc kubenswrapper[4784]: I0123 06:20:29.986579 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:29Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.003862 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.020786 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.038081 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.041453 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.041497 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.041510 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc 
kubenswrapper[4784]: I0123 06:20:30.041528 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.041542 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.060077 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26
408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.074927 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.103540 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06
:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.120317 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.138628 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.144712 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.144793 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.144808 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.144829 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.144845 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.200159 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:52:48.203185858 +0000 UTC Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.248100 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.248190 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.248203 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.248222 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.248235 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.350974 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.351014 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.351025 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.351041 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.351052 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.446019 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.446090 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.446103 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.446128 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.446153 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: E0123 06:20:30.472019 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.477403 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.477445 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.477455 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.477476 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.477488 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: E0123 06:20:30.491931 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.497017 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.497081 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.497101 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.497126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.497144 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: E0123 06:20:30.510956 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.515248 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.515335 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.515352 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.515377 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.515392 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: E0123 06:20:30.529256 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.533902 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.533962 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.533979 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.534004 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.534022 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: E0123 06:20:30.547906 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: E0123 06:20:30.548088 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.549895 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.549941 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.549954 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.549973 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.549986 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.580064 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/0.log" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.583639 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.583762 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.605268 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is 
after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.613067 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.622615 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.638735 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.652691 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.652774 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.652789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.652811 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.652826 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.653344 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.669519 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.698374 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.710138 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.725237 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.740545 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.756174 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc 
kubenswrapper[4784]: I0123 06:20:30.756224 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.756235 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.756257 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.756272 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.766267 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.784036 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.797884 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.812846 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.833796 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"message\\\":\\\"3 06:20:29.349168 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:29.349141 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:29.349188 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:29.349224 6046 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy 
(0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349245 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:29.349294 6046 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349499 6046 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.349656 6046 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350072 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350335 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350552 6046 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.852303 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.858576 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.858661 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.858672 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.858693 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.858703 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.867705 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z 
is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.888661 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01
-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.907259 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.921831 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.933384 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.954786 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"message\\\":\\\"3 06:20:29.349168 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:29.349141 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:29.349188 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:29.349224 6046 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy 
(0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349245 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:29.349294 6046 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349499 6046 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.349656 6046 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350072 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350335 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350552 6046 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.960974 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.961026 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.961041 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.961063 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.961075 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:30Z","lastTransitionTime":"2026-01-23T06:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.976356 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:30 crc kubenswrapper[4784]: I0123 06:20:30.993824 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.009415 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.024496 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.039096 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.052978 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.063464 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.063813 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.063943 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc 
kubenswrapper[4784]: I0123 06:20:31.064080 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.064188 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.073743 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26
408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.086624 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.103370 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.167354 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.167416 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.167435 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.167458 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.167473 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.200412 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:25:28.165742985 +0000 UTC Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.252982 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.253010 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.253077 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:31 crc kubenswrapper[4784]: E0123 06:20:31.253206 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:31 crc kubenswrapper[4784]: E0123 06:20:31.253338 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:31 crc kubenswrapper[4784]: E0123 06:20:31.253437 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.271047 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.271144 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.271172 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.271205 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.271233 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.376054 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.376163 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.376184 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.376209 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.376228 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.479568 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.479675 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.479703 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.479740 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.479801 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.583064 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.583113 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.583122 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.583146 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.583157 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.589314 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/1.log" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.589974 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/0.log" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.593346 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f" exitCode=1 Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.593435 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.593568 4784 scope.go:117] "RemoveContainer" containerID="ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.594399 4784 scope.go:117] "RemoveContainer" containerID="93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f" Jan 23 06:20:31 crc kubenswrapper[4784]: E0123 06:20:31.594604 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.621986 4784 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb36
65164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754
906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.639678 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.660781 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.687991 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc 
kubenswrapper[4784]: I0123 06:20:31.688072 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.688094 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.688126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.688160 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.696997 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"message\\\":\\\"3 06:20:29.349168 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI0123 06:20:29.349141 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:29.349188 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:29.349224 6046 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349245 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:29.349294 6046 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349499 6046 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.349656 6046 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350072 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350335 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350552 6046 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf
67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.720986 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.740027 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.758842 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.776312 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.791374 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.791409 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.791418 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.791435 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.791446 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.795602 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.809269 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.830059 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.841730 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.857651 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.880847 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.894797 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.894866 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.894882 4784 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.894912 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.894928 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.898593 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3
b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.998167 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.998223 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.998240 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.998261 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:31 crc kubenswrapper[4784]: I0123 06:20:31.998278 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:31Z","lastTransitionTime":"2026-01-23T06:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.101661 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.101717 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.101727 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.101761 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.101798 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.182332 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5"] Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.183329 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.187490 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.187517 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.201329 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:42:44.946927065 +0000 UTC Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.201845 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfl9\" (UniqueName: \"kubernetes.io/projected/e8563a82-9f1c-4972-843c-4461fef9994d-kube-api-access-wrfl9\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.201898 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8563a82-9f1c-4972-843c-4461fef9994d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.201995 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8563a82-9f1c-4972-843c-4461fef9994d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: 
\"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.202023 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8563a82-9f1c-4972-843c-4461fef9994d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.205030 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.205078 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.205092 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.205118 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.205135 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.208036 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.224365 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.244055 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.259864 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.277612 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.290517 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.302949 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrfl9\" (UniqueName: \"kubernetes.io/projected/e8563a82-9f1c-4972-843c-4461fef9994d-kube-api-access-wrfl9\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.303027 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8563a82-9f1c-4972-843c-4461fef9994d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.303076 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8563a82-9f1c-4972-843c-4461fef9994d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.303113 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8563a82-9f1c-4972-843c-4461fef9994d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.303959 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8563a82-9f1c-4972-843c-4461fef9994d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.304538 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8563a82-9f1c-4972-843c-4461fef9994d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.307910 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.307984 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: 
I0123 06:20:32.308003 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.308030 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.308053 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.310411 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8563a82-9f1c-4972-843c-4461fef9994d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.321038 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.323533 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrfl9\" (UniqueName: \"kubernetes.io/projected/e8563a82-9f1c-4972-843c-4461fef9994d-kube-api-access-wrfl9\") pod \"ovnkube-control-plane-749d76644c-9q9h5\" (UID: \"e8563a82-9f1c-4972-843c-4461fef9994d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.339334 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.356665 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.373375 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\
\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 
requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.389131 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.402324 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.410993 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.411214 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.411276 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.411366 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.411438 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.423326 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce9dfbcf792d7015217c29f37de7ca2720fa5e9beaf6ed74e49447b0a9d99437\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"message\\\":\\\"3 06:20:29.349168 6046 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:29.349141 6046 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:29.349188 6046 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:29.349224 6046 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy 
(0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349245 6046 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:29.349294 6046 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:20:29.349499 6046 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.349656 6046 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350072 6046 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350335 6046 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:20:29.350552 6046 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf
67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.439076 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.456590 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.472508 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.505414 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.514686 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.514738 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.514772 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.514794 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.514807 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: W0123 06:20:32.526103 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8563a82_9f1c_4972_843c_4461fef9994d.slice/crio-f94a49a7aeb5d66cabcdc337105648550624b011711d618e6dcf0fac2015b723 WatchSource:0}: Error finding container f94a49a7aeb5d66cabcdc337105648550624b011711d618e6dcf0fac2015b723: Status 404 returned error can't find the container with id f94a49a7aeb5d66cabcdc337105648550624b011711d618e6dcf0fac2015b723 Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.605636 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/1.log" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.621094 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.621137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.621147 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.621166 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.621179 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.629909 4784 scope.go:117] "RemoveContainer" containerID="93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f" Jan 23 06:20:32 crc kubenswrapper[4784]: E0123 06:20:32.630140 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.636200 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" event={"ID":"e8563a82-9f1c-4972-843c-4461fef9994d","Type":"ContainerStarted","Data":"f94a49a7aeb5d66cabcdc337105648550624b011711d618e6dcf0fac2015b723"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.651542 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.671862 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.685359 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.698799 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.714835 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.723705 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.723742 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.723775 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.723794 4784 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.723809 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.730724 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.745096 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.763149 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.777315 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.798316 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.815100 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.826205 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.826245 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.826256 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.826274 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.826284 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.830479 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.846377 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.859942 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.872309 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.896986 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.929035 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.929359 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.929368 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.929384 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.929395 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:32Z","lastTransitionTime":"2026-01-23T06:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.945411 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lcdgv"] Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.946149 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:32 crc kubenswrapper[4784]: E0123 06:20:32.946236 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.966914 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:32 crc kubenswrapper[4784]: I0123 06:20:32.994016 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.005728 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.011765 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.011860 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.011899 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls7mf\" (UniqueName: \"kubernetes.io/projected/cdf947ef-7279-4d43-854c-d836e0043e5b-kube-api-access-ls7mf\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.011997 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:49.011953936 +0000 UTC m=+52.244461920 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.018913 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\
"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19
:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.031461 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.032869 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.032905 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.032914 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.032933 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.032944 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.045464 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.059425 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.074907 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.089430 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.109416 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.113036 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.113120 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.113160 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.113209 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls7mf\" (UniqueName: \"kubernetes.io/projected/cdf947ef-7279-4d43-854c-d836e0043e5b-kube-api-access-ls7mf\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 
06:20:33.113252 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113277 4784 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.113295 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113306 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113373 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113387 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs podName:cdf947ef-7279-4d43-854c-d836e0043e5b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:33.613360915 +0000 UTC m=+36.845868889 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs") pod "network-metrics-daemon-lcdgv" (UID: "cdf947ef-7279-4d43-854c-d836e0043e5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113399 4784 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113460 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:49.113442947 +0000 UTC m=+52.345950931 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113472 4784 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113534 4784 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113560 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:49.113535799 +0000 UTC m=+52.346043813 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113690 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:49.113661202 +0000 UTC m=+52.346169166 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113798 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113814 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113830 4784 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.113860 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:20:49.113851726 +0000 UTC m=+52.346359700 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.123334 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.133462 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls7mf\" (UniqueName: \"kubernetes.io/projected/cdf947ef-7279-4d43-854c-d836e0043e5b-kube-api-access-ls7mf\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.135694 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.135741 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.135785 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.135808 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.135822 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.138465 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.171850 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.202376 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:21:34.12770253 +0000 UTC Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.214614 4784 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.239158 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.239209 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.239223 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.239244 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.239255 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.253681 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.253770 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.253840 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.253777 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.253952 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.254036 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.259848 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.290600 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.326260 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.342491 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.342656 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.342833 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.342992 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.343123 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.446658 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.446734 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.446771 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.446803 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.446820 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.550929 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.550996 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.551018 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.551047 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.551067 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.619332 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.619643 4784 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: E0123 06:20:33.619813 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs podName:cdf947ef-7279-4d43-854c-d836e0043e5b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:34.619724807 +0000 UTC m=+37.852232811 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs") pod "network-metrics-daemon-lcdgv" (UID: "cdf947ef-7279-4d43-854c-d836e0043e5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.643496 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" event={"ID":"e8563a82-9f1c-4972-843c-4461fef9994d","Type":"ContainerStarted","Data":"fe48997fa87aa09f9162350da05422b0e9b3fe78655a69f2aa49f86b8866eaca"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.643581 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" event={"ID":"e8563a82-9f1c-4972-843c-4461fef9994d","Type":"ContainerStarted","Data":"87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.655221 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.655313 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.655340 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.655376 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.655396 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.668031 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.690493 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.710185 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.734970 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.750407 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.758800 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.759116 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.759258 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.759408 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.759555 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.771091 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3fe78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc 
kubenswrapper[4784]: I0123 06:20:33.790441 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.812480 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.836368 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.864411 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.864476 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.864511 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc 
kubenswrapper[4784]: I0123 06:20:33.864538 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.864553 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.868032 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.894010 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.912493 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.946876 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.966500 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.968462 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.968517 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.968533 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.968557 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.968576 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:33Z","lastTransitionTime":"2026-01-23T06:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:33 crc kubenswrapper[4784]: I0123 06:20:33.990281 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:33Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.008806 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:34Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.025044 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:34Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.071596 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.071669 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.071684 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.071706 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.071721 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.175297 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.175346 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.175356 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.175375 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.175388 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.202822 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:20:38.913891965 +0000 UTC Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.279176 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.279267 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.279294 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.279326 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.279352 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.383860 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.383940 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.383953 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.383977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.383992 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.487485 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.487559 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.487577 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.487608 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.487629 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.591704 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.591833 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.591857 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.591888 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.591911 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.634156 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:34 crc kubenswrapper[4784]: E0123 06:20:34.634473 4784 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:34 crc kubenswrapper[4784]: E0123 06:20:34.634900 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs podName:cdf947ef-7279-4d43-854c-d836e0043e5b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:36.63485949 +0000 UTC m=+39.867367494 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs") pod "network-metrics-daemon-lcdgv" (UID: "cdf947ef-7279-4d43-854c-d836e0043e5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.696076 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.696151 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.696190 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.696222 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.696241 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.800276 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.800360 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.800380 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.800409 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.800430 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.904614 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.904722 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.904738 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.904794 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:34 crc kubenswrapper[4784]: I0123 06:20:34.904810 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:34Z","lastTransitionTime":"2026-01-23T06:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.008545 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.008607 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.008624 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.008652 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.008669 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.112319 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.112386 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.112398 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.112421 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.112437 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.203797 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:21:29.683104006 +0000 UTC Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.214917 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.214999 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.215019 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.215050 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.215071 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.253095 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.253269 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.253101 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.253387 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:35 crc kubenswrapper[4784]: E0123 06:20:35.253335 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:35 crc kubenswrapper[4784]: E0123 06:20:35.253615 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:35 crc kubenswrapper[4784]: E0123 06:20:35.253784 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:35 crc kubenswrapper[4784]: E0123 06:20:35.253914 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.319203 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.319289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.319303 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.319324 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.319338 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.422696 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.422776 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.422787 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.422805 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.422817 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.526735 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.526845 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.526856 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.526883 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.526904 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.629310 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.629369 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.629385 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.629410 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.629424 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.732219 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.732287 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.732309 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.732336 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.732356 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.835922 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.835979 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.835996 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.836021 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.836037 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.939079 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.939139 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.939151 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.939172 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:35 crc kubenswrapper[4784]: I0123 06:20:35.939189 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:35Z","lastTransitionTime":"2026-01-23T06:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.041959 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.042029 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.042042 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.042064 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.042078 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.145482 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.145556 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.145581 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.145614 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.145640 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.204444 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:18:33.484658748 +0000 UTC Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.248289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.248342 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.248351 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.248371 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.248384 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.351691 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.351797 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.351815 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.351846 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.351872 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.457000 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.457103 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.457123 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.457152 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.457171 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.560841 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.560924 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.560943 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.560972 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.560993 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.660112 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:36 crc kubenswrapper[4784]: E0123 06:20:36.660335 4784 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:36 crc kubenswrapper[4784]: E0123 06:20:36.660418 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs podName:cdf947ef-7279-4d43-854c-d836e0043e5b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:40.660394534 +0000 UTC m=+43.892902508 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs") pod "network-metrics-daemon-lcdgv" (UID: "cdf947ef-7279-4d43-854c-d836e0043e5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.663942 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.663978 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.663992 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.664014 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.664028 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.767609 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.767675 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.767701 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.767724 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.767738 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.873330 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.873412 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.873431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.873461 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.873484 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.977562 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.977659 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.977671 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.977696 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:36 crc kubenswrapper[4784]: I0123 06:20:36.977974 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:36Z","lastTransitionTime":"2026-01-23T06:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.082198 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.082290 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.082316 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.082358 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.082378 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.185575 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.185629 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.185639 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.185657 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.185670 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.205143 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:29:02.634879048 +0000 UTC Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.253164 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.253250 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.253293 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.253355 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:37 crc kubenswrapper[4784]: E0123 06:20:37.253629 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:37 crc kubenswrapper[4784]: E0123 06:20:37.253707 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:37 crc kubenswrapper[4784]: E0123 06:20:37.253822 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:37 crc kubenswrapper[4784]: E0123 06:20:37.253517 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.280210 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.289221 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.289295 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.289317 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.289345 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.289363 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.296867 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.318585 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.338197 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:
16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.356182 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.376269 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.391825 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.391868 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.391880 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.391901 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.391913 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.401435 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.413912 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.426955 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.443580 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.459235 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.480179 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.494966 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.495010 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.495043 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.495062 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.495075 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.500377 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.515583 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.529820 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.539089 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.554204 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.598594 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.598655 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.598665 4784 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.598687 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.598698 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.702565 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.702642 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.702664 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.702693 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.702713 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.805897 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.805955 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.805971 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.805995 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.806010 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.908772 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.908821 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.908833 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.908851 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:37 crc kubenswrapper[4784]: I0123 06:20:37.908860 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:37Z","lastTransitionTime":"2026-01-23T06:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.011763 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.011814 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.011827 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.011848 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.011867 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.115234 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.115300 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.115318 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.115348 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.115372 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.206273 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:39:21.411906737 +0000 UTC Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.219190 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.219247 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.219262 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.219291 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.219309 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.322693 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.322840 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.322854 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.322870 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.322881 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.426694 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.426743 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.426775 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.426797 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.426813 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.530826 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.530885 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.530911 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.530936 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.530965 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.634731 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.634839 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.634862 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.634898 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.634928 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.738557 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.738602 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.738615 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.738636 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.738651 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.841852 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.841917 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.841935 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.841964 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.841982 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.945471 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.945553 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.945573 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.945598 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:38 crc kubenswrapper[4784]: I0123 06:20:38.945616 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:38Z","lastTransitionTime":"2026-01-23T06:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.048725 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.048815 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.048829 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.048852 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.048872 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.152367 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.152434 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.152447 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.152470 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.152509 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.207363 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:40:12.9374278 +0000 UTC Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.252888 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.252913 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:39 crc kubenswrapper[4784]: E0123 06:20:39.253097 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.253013 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.253202 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:39 crc kubenswrapper[4784]: E0123 06:20:39.253296 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:39 crc kubenswrapper[4784]: E0123 06:20:39.253441 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:39 crc kubenswrapper[4784]: E0123 06:20:39.253600 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.255213 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.255250 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.255264 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.255280 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.255294 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.357932 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.357968 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.357977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.357992 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.358006 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.461362 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.461440 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.461452 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.461474 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.461503 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.564891 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.564957 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.564973 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.564999 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.565014 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.666617 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.666665 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.666676 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.666693 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.666704 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.769963 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.770028 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.770043 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.770064 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.770078 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.873995 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.874043 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.874055 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.874075 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.874089 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.976979 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.977045 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.977063 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.977091 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:39 crc kubenswrapper[4784]: I0123 06:20:39.977112 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:39Z","lastTransitionTime":"2026-01-23T06:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.081486 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.081585 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.081615 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.081667 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.081696 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.185520 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.185580 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.185598 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.185628 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.185647 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.207745 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:11:28.549612315 +0000 UTC Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.289944 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.290009 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.290019 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.290039 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.290049 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.393516 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.393574 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.393588 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.393610 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.393624 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.496745 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.496854 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.496875 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.496901 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.496935 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.600671 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.600729 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.600798 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.600827 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.600845 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.674271 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.674340 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.674357 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.674385 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.674404 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: E0123 06:20:40.692223 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.698353 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.698444 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.698471 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.698505 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.698530 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.708365 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:40 crc kubenswrapper[4784]: E0123 06:20:40.708711 4784 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:40 crc kubenswrapper[4784]: E0123 06:20:40.708899 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs podName:cdf947ef-7279-4d43-854c-d836e0043e5b nodeName:}" failed. No retries permitted until 2026-01-23 06:20:48.70883934 +0000 UTC m=+51.941347464 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs") pod "network-metrics-daemon-lcdgv" (UID: "cdf947ef-7279-4d43-854c-d836e0043e5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:40 crc kubenswrapper[4784]: E0123 06:20:40.714342 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-7
5bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.718854 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.718936 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.718954 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.718998 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.719013 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: E0123 06:20:40.734585 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.740417 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.740481 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.740494 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.740540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.740559 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: E0123 06:20:40.762619 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.767696 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.767815 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.767836 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.767871 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.767892 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: E0123 06:20:40.790050 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:40 crc kubenswrapper[4784]: E0123 06:20:40.790211 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.792296 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.792344 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.792359 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.792380 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.792398 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.895576 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.895649 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.895662 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.895685 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.895701 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.998450 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.998493 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.998508 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.998530 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:40 crc kubenswrapper[4784]: I0123 06:20:40.998543 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:40Z","lastTransitionTime":"2026-01-23T06:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.102208 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.102278 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.102288 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.102312 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.102327 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.205533 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.205613 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.205623 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.205644 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.205656 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.208739 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:31:33.858391378 +0000 UTC Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.253447 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.253517 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.253651 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:41 crc kubenswrapper[4784]: E0123 06:20:41.253780 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:41 crc kubenswrapper[4784]: E0123 06:20:41.253925 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.253947 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:41 crc kubenswrapper[4784]: E0123 06:20:41.254099 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:41 crc kubenswrapper[4784]: E0123 06:20:41.254281 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.308571 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.308645 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.308659 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.308701 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.308717 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.412662 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.412715 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.412728 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.412767 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.412787 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.517127 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.517198 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.517218 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.517241 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.517258 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.620900 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.620963 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.620975 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.620997 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.621013 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.724584 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.724661 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.724680 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.724708 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.724731 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.827861 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.827948 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.827974 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.828009 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.828034 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.931854 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.931911 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.931923 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.931943 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:41 crc kubenswrapper[4784]: I0123 06:20:41.931957 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:41Z","lastTransitionTime":"2026-01-23T06:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.034954 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.035038 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.035052 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.035075 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.035092 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.139276 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.139376 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.139395 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.139424 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.139444 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.209499 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:55:01.568601092 +0000 UTC Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.243051 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.243121 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.243136 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.243162 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.243180 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.346082 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.346143 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.346182 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.346203 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.346216 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.450110 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.450169 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.450183 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.450204 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.450242 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.552795 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.552844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.552858 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.552878 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.552893 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.655948 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.656003 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.656012 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.656030 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.656047 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.758862 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.758917 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.758928 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.758949 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.758961 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.862925 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.862981 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.862996 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.863018 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.863032 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.966540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.966617 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.966642 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.966674 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:42 crc kubenswrapper[4784]: I0123 06:20:42.966698 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:42Z","lastTransitionTime":"2026-01-23T06:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.070737 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.070865 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.070890 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.070917 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.070937 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.175325 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.175379 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.175395 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.175422 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.175441 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.209791 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:21:33.524046908 +0000 UTC Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.253608 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:43 crc kubenswrapper[4784]: E0123 06:20:43.253833 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.254396 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:43 crc kubenswrapper[4784]: E0123 06:20:43.254493 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.254560 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:43 crc kubenswrapper[4784]: E0123 06:20:43.254620 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.254873 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:43 crc kubenswrapper[4784]: E0123 06:20:43.254946 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.269360 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.278236 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.278304 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.278316 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.278340 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.278354 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.283160 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.307056 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf4
62fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.326347 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.345060 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.362793 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\
\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 
requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.379616 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.381223 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.381247 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.381258 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.381307 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.381318 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.393838 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.413412 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.426822 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.441518 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.455927 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.470252 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.484991 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.485038 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.485064 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.485084 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.485096 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.485645 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9d
a410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.501759 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.519653 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.537606 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.551951 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.565537 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.589501 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.589565 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.589579 4784 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.589600 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.589652 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.693203 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.693254 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.693271 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.693297 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.693316 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.797243 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.797355 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.797385 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.797424 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.797455 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.861374 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.862409 4784 scope.go:117] "RemoveContainer" containerID="93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.901338 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.901879 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.902158 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.902379 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:43 crc kubenswrapper[4784]: I0123 06:20:43.902582 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:43Z","lastTransitionTime":"2026-01-23T06:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.005977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.006050 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.006069 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.006149 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.006172 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.109643 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.109723 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.109778 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.109814 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.109836 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.210595 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:46:56.441972524 +0000 UTC Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.213068 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.213123 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.213137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.213161 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.213175 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.316555 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.316625 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.316653 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.316696 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.316724 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.419917 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.419970 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.419979 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.419995 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.420006 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.523598 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.523639 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.523652 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.523672 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.523689 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.626786 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.626827 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.626839 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.626856 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.626876 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.687964 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/1.log" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.690829 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.692094 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.715310 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.729495 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.729533 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.729546 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 
06:20:44.729563 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.729574 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.732176 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.750440 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.767863 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.784276 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.795665 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.809104 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.819120 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.830771 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.832659 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.832697 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.832709 4784 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.832723 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.832733 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.852149 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c6
0c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.865121 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.884649 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.897675 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.914247 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.927043 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.934736 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.934797 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.934808 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:44 crc 
kubenswrapper[4784]: I0123 06:20:44.934829 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.934840 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:44Z","lastTransitionTime":"2026-01-23T06:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.938850 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.958196 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:44 crc kubenswrapper[4784]: I0123 06:20:44.966993 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc 
kubenswrapper[4784]: I0123 06:20:45.036982 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.037026 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.037037 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.037054 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.037067 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.139042 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.139089 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.139101 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.139116 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.139127 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.211038 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:57:35.617668372 +0000 UTC Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.241310 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.241362 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.241374 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.241393 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.241407 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.255502 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:45 crc kubenswrapper[4784]: E0123 06:20:45.255636 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.256098 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:45 crc kubenswrapper[4784]: E0123 06:20:45.256164 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.256215 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:45 crc kubenswrapper[4784]: E0123 06:20:45.256264 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.256317 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:45 crc kubenswrapper[4784]: E0123 06:20:45.256377 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.344133 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.344172 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.344183 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.344198 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.344208 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.447266 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.447316 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.447335 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.447359 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.447378 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.550588 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.550665 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.550688 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.550720 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.550743 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.654049 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.654157 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.654169 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.654187 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.654201 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.697196 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/2.log" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.698104 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/1.log" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.702322 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767" exitCode=1 Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.702387 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.702437 4784 scope.go:117] "RemoveContainer" containerID="93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.703968 4784 scope.go:117] "RemoveContainer" containerID="c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767" Jan 23 06:20:45 crc kubenswrapper[4784]: E0123 06:20:45.705978 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.729179 4784 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.748075 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.758006 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.758069 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.758088 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.758113 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.758131 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.765678 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.782725 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.809869 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.824997 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.843134 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.860063 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.862286 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.862361 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.862402 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.862419 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.862429 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.877574 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.898200 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.916320 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.935977 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.951025 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.964800 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.965005 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.965099 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.965205 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.965287 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:45Z","lastTransitionTime":"2026-01-23T06:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.981461 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f1f0aee08f55ab90b56f9e6dfd59a9781d7e446cd4e7b39cc4928a9e804a5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:31Z\\\",\\\"message\\\":\\\"ow:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} 
{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 06:20:30.903652 6208 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP 
event handler 8\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b
3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:45 crc kubenswrapper[4784]: I0123 06:20:45.996605 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.012457 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb055224822
3e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.027513 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.041340 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.068298 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.068402 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.068502 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc 
kubenswrapper[4784]: I0123 06:20:46.068969 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.068992 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.172330 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.172387 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.172400 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.172421 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.172434 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.211614 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:01:50.906821996 +0000 UTC Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.275313 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.275368 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.275379 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.275402 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.275417 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.378354 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.378415 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.378435 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.378463 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.378485 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.482376 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.482432 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.482451 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.482475 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.482493 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.586377 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.586441 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.586460 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.586484 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.586503 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.689635 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.689712 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.689737 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.689807 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.689831 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.709254 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/2.log" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.714973 4784 scope.go:117] "RemoveContainer" containerID="c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767" Jan 23 06:20:46 crc kubenswrapper[4784]: E0123 06:20:46.715276 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.729887 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.746000 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.768197 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.783734 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.792797 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.792857 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.792877 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc 
kubenswrapper[4784]: I0123 06:20:46.792904 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.792927 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.805035 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26
408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.818171 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.862008 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06
:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.885185 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.895363 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.895412 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.895425 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.895441 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.895454 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.900585 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z 
is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.918491 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14
c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.933814 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.951560 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.965222 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.987064 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.999164 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.999212 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.999224 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.999240 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:46 crc kubenswrapper[4784]: I0123 06:20:46.999253 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:46Z","lastTransitionTime":"2026-01-23T06:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.002429 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc 
kubenswrapper[4784]: I0123 06:20:47.017374 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.031341 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.046932 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.101820 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.102157 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.102285 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.102465 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.102585 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.205540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.205606 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.205641 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.205681 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.205705 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.212185 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:28:41.959450328 +0000 UTC Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.253743 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.253883 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:47 crc kubenswrapper[4784]: E0123 06:20:47.253992 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.254012 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:47 crc kubenswrapper[4784]: E0123 06:20:47.254224 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:47 crc kubenswrapper[4784]: E0123 06:20:47.254393 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.254487 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:47 crc kubenswrapper[4784]: E0123 06:20:47.254617 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.277246 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.292653 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.308300 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 
06:20:47.308364 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.308381 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.308406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.308424 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.311857 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.333816 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.350861 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.369293 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.389410 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.404712 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.410635 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.410673 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.410685 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.410703 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.410715 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.423481 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.440098 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.462955 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.483907 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.499930 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.513496 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.513535 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.513550 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.513569 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.513580 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.530882 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.553428 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.568484 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.579513 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.593809 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.617857 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.617907 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.617919 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.617942 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.617956 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.722005 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.722087 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.722107 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.722136 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.722158 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.825433 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.825778 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.825844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.825935 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.826009 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.928478 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.928884 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.929135 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.929287 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:47 crc kubenswrapper[4784]: I0123 06:20:47.929409 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:47Z","lastTransitionTime":"2026-01-23T06:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.032697 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.032769 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.032780 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.032798 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.032809 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.135725 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.135803 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.135825 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.135844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.135856 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.213002 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:01:31.941588327 +0000 UTC Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.238878 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.238957 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.238981 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.239006 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.239219 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.342351 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.342428 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.342439 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.342461 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.342474 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.446032 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.446507 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.446649 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.446820 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.446940 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.550264 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.550326 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.550350 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.550390 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.550413 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.654502 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.654582 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.654598 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.654627 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.654645 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.758253 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.758302 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.758317 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.758341 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.758356 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.808648 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:48 crc kubenswrapper[4784]: E0123 06:20:48.808814 4784 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:48 crc kubenswrapper[4784]: E0123 06:20:48.808886 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs podName:cdf947ef-7279-4d43-854c-d836e0043e5b nodeName:}" failed. No retries permitted until 2026-01-23 06:21:04.808868345 +0000 UTC m=+68.041376319 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs") pod "network-metrics-daemon-lcdgv" (UID: "cdf947ef-7279-4d43-854c-d836e0043e5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.862416 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.862492 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.862512 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.862547 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.862572 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.965544 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.965589 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.965606 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.965661 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:48 crc kubenswrapper[4784]: I0123 06:20:48.965677 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:48Z","lastTransitionTime":"2026-01-23T06:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.068926 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.068989 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.069006 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.069029 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.069048 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.111241 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.111506 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 06:21:21.11146154 +0000 UTC m=+84.343969554 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.172548 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.172641 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.172657 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.172679 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.172700 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.212470 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.212538 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.212567 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.212604 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.212740 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.212788 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.212804 4784 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.212811 4784 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.212857 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:21:21.212841246 +0000 UTC m=+84.445349230 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.212881 4784 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.212925 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.212902 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:21:21.212880457 +0000 UTC m=+84.445388471 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.213004 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.213029 4784 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.213043 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:21:21.21301217 +0000 UTC m=+84.445520164 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.213139 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-23 06:21:21.213105752 +0000 UTC m=+84.445613916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.213151 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:59:22.206359606 +0000 UTC Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.253492 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.253533 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.253729 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.253844 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.253934 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.254143 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.254302 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:49 crc kubenswrapper[4784]: E0123 06:20:49.254561 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.275826 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.275901 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.275927 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.275960 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.275987 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.378403 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.378452 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.378470 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.378493 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.378509 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.481855 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.482269 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.482526 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.482739 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.482959 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.586711 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.586814 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.586831 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.586857 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.586872 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.692681 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.692802 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.692820 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.692840 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.692860 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.796456 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.796524 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.796538 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.796560 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.796574 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.901218 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.901281 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.901292 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.901313 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:49 crc kubenswrapper[4784]: I0123 06:20:49.901324 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:49Z","lastTransitionTime":"2026-01-23T06:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.005020 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.005086 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.005104 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.005128 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.005145 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.108637 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.109116 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.109267 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.109424 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.109645 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.212847 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.212912 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.212934 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.212960 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.212977 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.213316 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:22:25.113602176 +0000 UTC Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.317385 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.317480 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.317502 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.317536 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.317559 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.420801 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.420879 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.420898 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.420929 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.420950 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.524401 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.524457 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.524474 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.524495 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.524509 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.627014 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.627055 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.627068 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.627084 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.627097 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.730739 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.731157 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.731301 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.731418 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.731599 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.834837 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.834919 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.834943 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.834971 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.834990 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.938876 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.939526 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.939616 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.939826 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.939913 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.941496 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.941527 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.941535 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.941550 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.941560 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: E0123 06:20:50.955475 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.959315 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.959380 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.959403 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.959434 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.959455 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: E0123 06:20:50.975521 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.980770 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.980825 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.980835 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.980852 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:50 crc kubenswrapper[4784]: I0123 06:20:50.980864 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:50Z","lastTransitionTime":"2026-01-23T06:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:50 crc kubenswrapper[4784]: E0123 06:20:50.996582 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.002513 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.002669 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.002775 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.002876 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.002963 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: E0123 06:20:51.023105 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.028126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.028178 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.028190 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.028212 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.028226 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: E0123 06:20:51.048695 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:51 crc kubenswrapper[4784]: E0123 06:20:51.048905 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.051132 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.051234 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.051321 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.051420 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.051480 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.155105 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.155472 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.155558 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.155706 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.155850 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.213468 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:15:10.43506948 +0000 UTC Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.252716 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.252863 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.252916 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.252794 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:51 crc kubenswrapper[4784]: E0123 06:20:51.253001 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:51 crc kubenswrapper[4784]: E0123 06:20:51.253127 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:51 crc kubenswrapper[4784]: E0123 06:20:51.253227 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:51 crc kubenswrapper[4784]: E0123 06:20:51.253298 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.265023 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.265099 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.265114 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.265135 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.265150 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.368340 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.368398 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.368413 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.368437 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.368454 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.471465 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.471509 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.471520 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.471534 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.471545 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.574253 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.574319 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.574334 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.574353 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.574369 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.676994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.677094 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.677117 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.677151 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.677170 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.779219 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.779263 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.779273 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.779289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.779300 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.882845 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.882896 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.882919 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.882961 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.882993 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.986682 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.986738 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.986769 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.986786 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:51 crc kubenswrapper[4784]: I0123 06:20:51.986798 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:51Z","lastTransitionTime":"2026-01-23T06:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.089367 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.089428 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.089442 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.089462 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.089475 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.192736 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.192806 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.192818 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.192837 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.192849 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.215356 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:21:53.662166338 +0000 UTC Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.296361 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.296399 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.296411 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.296428 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.296443 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.399406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.399542 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.399564 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.399592 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.399611 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.502284 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.502354 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.502369 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.502392 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.502407 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.605330 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.605396 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.605414 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.605441 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.605464 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.708889 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.708977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.708995 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.709019 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.709036 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.812228 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.812281 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.812290 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.812310 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.812321 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.915121 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.915202 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.915214 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.915233 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:52 crc kubenswrapper[4784]: I0123 06:20:52.915246 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:52Z","lastTransitionTime":"2026-01-23T06:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.019559 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.019644 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.019657 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.019673 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.019704 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.123368 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.123450 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.123474 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.123503 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.123521 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.215956 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:49:25.316025427 +0000 UTC Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.226534 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.226607 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.226630 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.226656 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.226676 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.253462 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.253538 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.253646 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:53 crc kubenswrapper[4784]: E0123 06:20:53.253650 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:53 crc kubenswrapper[4784]: E0123 06:20:53.253733 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.253818 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:53 crc kubenswrapper[4784]: E0123 06:20:53.253862 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:53 crc kubenswrapper[4784]: E0123 06:20:53.253908 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.329923 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.329977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.329996 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.330023 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.330041 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.433419 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.433515 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.433540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.433574 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.433595 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.536639 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.536727 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.536742 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.536787 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.536802 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.640371 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.640427 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.640440 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.640462 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.640478 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.747126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.747193 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.747212 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.747235 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.747252 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.850693 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.850801 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.850835 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.850867 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.850890 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.953427 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.953469 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.953479 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.953494 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:53 crc kubenswrapper[4784]: I0123 06:20:53.953504 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:53Z","lastTransitionTime":"2026-01-23T06:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.056893 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.056959 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.056977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.057002 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.057019 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.159845 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.159905 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.159922 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.159963 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.159980 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.216906 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:41:53.313007907 +0000 UTC Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.263349 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.263422 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.263446 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.263469 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.263487 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.365540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.365601 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.365615 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.365632 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.365645 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.468710 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.468781 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.468793 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.468812 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.468824 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.572542 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.572643 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.572661 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.572686 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.572706 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.676092 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.676152 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.676173 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.676202 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.676223 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.778791 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.778846 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.778860 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.778879 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.778893 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.881735 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.881807 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.881823 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.881842 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.881856 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.985219 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.985295 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.985314 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.985340 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:54 crc kubenswrapper[4784]: I0123 06:20:54.985359 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:54Z","lastTransitionTime":"2026-01-23T06:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.088458 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.088519 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.088544 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.088575 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.088598 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.191495 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.191547 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.191558 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.191576 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.191588 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.217077 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:29:17.99401475 +0000 UTC Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.252953 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.253025 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.253069 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.253073 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:55 crc kubenswrapper[4784]: E0123 06:20:55.253246 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:55 crc kubenswrapper[4784]: E0123 06:20:55.253379 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:55 crc kubenswrapper[4784]: E0123 06:20:55.253446 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:55 crc kubenswrapper[4784]: E0123 06:20:55.253691 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.294370 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.294438 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.294451 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.294468 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.294503 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.396964 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.397061 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.397088 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.397118 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.397142 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.500806 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.500876 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.500894 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.500919 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.500937 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.603251 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.603322 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.603392 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.603420 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.603438 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.706212 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.706272 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.706289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.706314 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.706338 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.810214 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.810297 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.810315 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.810338 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.810355 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.913915 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.913970 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.913983 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.914003 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:55 crc kubenswrapper[4784]: I0123 06:20:55.914018 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:55Z","lastTransitionTime":"2026-01-23T06:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.017744 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.017823 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.017840 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.017863 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.017880 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.120348 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.120402 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.120418 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.120439 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.120453 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.217887 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:59:11.150463508 +0000 UTC Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.223312 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.223344 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.223357 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.223374 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.223386 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.330824 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.330898 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.330914 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.330938 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.330980 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.434200 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.434295 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.434316 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.434346 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.434366 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.538226 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.538289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.538301 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.538321 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.538336 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.641889 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.641972 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.641994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.642021 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.642045 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.746001 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.746087 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.746105 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.746137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.746161 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.848994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.849086 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.849116 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.849149 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.849174 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.951455 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.951494 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.951505 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.951521 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:56 crc kubenswrapper[4784]: I0123 06:20:56.951532 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:56Z","lastTransitionTime":"2026-01-23T06:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.054595 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.054668 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.054701 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.054733 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.054792 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.158084 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.158140 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.158154 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.158174 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.158187 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.218997 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:53:06.961995323 +0000 UTC Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.253180 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.253269 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:57 crc kubenswrapper[4784]: E0123 06:20:57.253393 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.253432 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.253537 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:57 crc kubenswrapper[4784]: E0123 06:20:57.253562 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:57 crc kubenswrapper[4784]: E0123 06:20:57.253650 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:57 crc kubenswrapper[4784]: E0123 06:20:57.253812 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.260588 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.260647 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.260661 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.260679 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.260694 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.274301 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.294384 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.311046 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.327557 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.342235 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.363435 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.365618 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.365713 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.365736 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.365808 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.365832 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.381561 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.396558 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637e
c58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3fe78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.413039 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.429734 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.458871 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.469231 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.469281 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.469293 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.469309 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.469320 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.480249 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.494415 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.511471 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.538020 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.552330 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.569339 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e
59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.572362 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.572417 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.572431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.572743 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.572813 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.590010 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19
:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:20:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.677660 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.677729 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.677742 4784 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.677781 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.677794 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.780653 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.780712 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.780728 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.780768 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.780784 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.883955 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.884014 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.884025 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.884049 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.884064 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.987230 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.987287 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.987305 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.987325 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:57 crc kubenswrapper[4784]: I0123 06:20:57.987340 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:57Z","lastTransitionTime":"2026-01-23T06:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.090217 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.090289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.090307 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.090331 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.090349 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.194233 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.194322 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.194335 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.194357 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.194374 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.219780 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:30:36.537172115 +0000 UTC Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.298084 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.298142 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.298154 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.298173 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.298186 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.400822 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.400874 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.400886 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.400906 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.400918 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.504443 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.504484 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.504492 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.504509 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.504523 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.608622 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.608666 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.608678 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.608701 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.608715 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.711929 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.711981 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.711994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.712011 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.712025 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.815776 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.815836 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.815845 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.815863 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.815875 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.919772 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.919830 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.919843 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.919863 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:58 crc kubenswrapper[4784]: I0123 06:20:58.919877 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:58Z","lastTransitionTime":"2026-01-23T06:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.022982 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.023045 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.023059 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.023081 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.023098 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.126291 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.126339 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.126351 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.126371 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.126385 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.220319 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:16:09.49038813 +0000 UTC Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.229892 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.229919 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.229930 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.229950 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.229966 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.253518 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.253557 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.253624 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.253717 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:20:59 crc kubenswrapper[4784]: E0123 06:20:59.253705 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:20:59 crc kubenswrapper[4784]: E0123 06:20:59.253842 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:20:59 crc kubenswrapper[4784]: E0123 06:20:59.253932 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:20:59 crc kubenswrapper[4784]: E0123 06:20:59.254006 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.333543 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.333599 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.333610 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.333632 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.333649 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.437274 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.437329 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.437343 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.437370 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.437387 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.540375 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.540420 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.540430 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.540446 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.540457 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.643179 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.643277 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.643288 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.643312 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.643324 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.746088 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.746177 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.746211 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.746245 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.746264 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.849678 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.849735 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.849779 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.849810 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.849824 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.952803 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.952856 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.952869 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.952887 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:20:59 crc kubenswrapper[4784]: I0123 06:20:59.952901 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:20:59Z","lastTransitionTime":"2026-01-23T06:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.055365 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.055452 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.055469 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.055491 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.055507 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.159883 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.159977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.160001 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.160031 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.160052 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.220827 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:01:06.162087939 +0000 UTC Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.255166 4784 scope.go:117] "RemoveContainer" containerID="c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767" Jan 23 06:21:00 crc kubenswrapper[4784]: E0123 06:21:00.255587 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.263608 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.263685 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.263700 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.263727 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.263747 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.366845 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.366894 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.366904 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.366922 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.366935 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.471662 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.471773 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.471789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.471818 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.471881 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.575438 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.575503 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.575514 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.575534 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.575547 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.678631 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.678704 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.678727 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.678790 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.678816 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.781624 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.781662 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.781672 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.781690 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.781704 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.885136 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.885190 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.885207 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.885234 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.885252 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.987559 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.987603 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.987614 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.987631 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:00 crc kubenswrapper[4784]: I0123 06:21:00.987644 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:00Z","lastTransitionTime":"2026-01-23T06:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.090646 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.090695 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.090708 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.090728 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.090742 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.193449 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.193485 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.193495 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.193510 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.193521 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.221746 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:10:07.173753679 +0000 UTC Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.253207 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.253207 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.253385 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.253403 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.253520 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.253586 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.253654 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.253725 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.297071 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.297123 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.297140 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.297161 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.297177 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.353550 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.353601 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.353615 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.353640 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.353653 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.373920 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:01Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.378716 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.378786 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.378798 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.378816 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.378829 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.393218 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:01Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.397929 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.397961 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.397993 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.398013 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.398029 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.412472 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:01Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.417681 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.417759 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.417770 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.417789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.417818 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.435043 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:01Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.440332 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.440373 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.440388 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.440406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.440419 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.457490 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:01Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:01 crc kubenswrapper[4784]: E0123 06:21:01.457639 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.459884 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.459994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.460068 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.460137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.460228 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.563423 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.563681 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.563879 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.564090 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.564317 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.667207 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.667590 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.667731 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.667927 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.668077 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.770651 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.771011 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.771137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.771220 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.771286 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.874767 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.874826 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.874839 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.874863 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.874878 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.977789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.977840 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.977852 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.977912 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:01 crc kubenswrapper[4784]: I0123 06:21:01.977925 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:01Z","lastTransitionTime":"2026-01-23T06:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.080766 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.080819 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.080836 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.080862 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.080875 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.183768 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.183820 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.183831 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.183848 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.183862 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.222343 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:59:23.756254433 +0000 UTC Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.287403 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.287451 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.287462 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.287481 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.287493 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.390851 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.390907 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.390920 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.390942 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.390955 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.494606 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.494678 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.494692 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.494715 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.494731 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.597105 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.597147 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.597156 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.597175 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.597187 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.700516 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.700559 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.700571 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.700590 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.700608 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.803795 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.804126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.804251 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.804365 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.804435 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.907944 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.907986 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.907994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.908011 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:02 crc kubenswrapper[4784]: I0123 06:21:02.908021 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:02Z","lastTransitionTime":"2026-01-23T06:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.011709 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.011766 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.011782 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.011800 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.011813 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.115170 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.115644 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.115870 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.116077 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.116253 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.219535 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.220023 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.220122 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.220208 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.220296 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.222728 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:38:23.402672819 +0000 UTC Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.255080 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.255199 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:03 crc kubenswrapper[4784]: E0123 06:21:03.255293 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.255312 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.255080 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:03 crc kubenswrapper[4784]: E0123 06:21:03.255371 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:03 crc kubenswrapper[4784]: E0123 06:21:03.255434 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:03 crc kubenswrapper[4784]: E0123 06:21:03.255482 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.323122 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.323181 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.323193 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.323214 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.323228 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.426485 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.426590 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.426604 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.426625 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.426645 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.530654 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.530699 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.530710 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.530730 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.530743 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.634101 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.634146 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.634158 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.634177 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.634188 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.736779 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.736839 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.736850 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.736872 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.736886 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.840066 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.840159 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.840181 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.840208 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.840227 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.942598 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.942664 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.942677 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.942696 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:03 crc kubenswrapper[4784]: I0123 06:21:03.942707 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:03Z","lastTransitionTime":"2026-01-23T06:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.046622 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.046682 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.046692 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.046712 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.046732 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.150341 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.150422 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.150439 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.150461 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.150480 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.223523 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:55:51.434989456 +0000 UTC Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.253899 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.253973 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.253993 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.254016 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.254028 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.357233 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.357305 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.357319 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.357340 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.357356 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.460676 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.461039 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.461052 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.461073 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.461085 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.564582 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.564642 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.564658 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.564683 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.564699 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.667881 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.667955 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.667966 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.667985 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.668001 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.771437 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.771489 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.771507 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.771533 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.771548 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.848220 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:04 crc kubenswrapper[4784]: E0123 06:21:04.848456 4784 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:21:04 crc kubenswrapper[4784]: E0123 06:21:04.848575 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs podName:cdf947ef-7279-4d43-854c-d836e0043e5b nodeName:}" failed. No retries permitted until 2026-01-23 06:21:36.848545247 +0000 UTC m=+100.081053221 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs") pod "network-metrics-daemon-lcdgv" (UID: "cdf947ef-7279-4d43-854c-d836e0043e5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.874962 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.875029 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.875046 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.875070 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.875089 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.978818 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.978885 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.978896 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.978915 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:04 crc kubenswrapper[4784]: I0123 06:21:04.978928 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:04Z","lastTransitionTime":"2026-01-23T06:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.083072 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.083126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.083137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.083154 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.083165 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.187635 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.187697 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.187711 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.187733 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.187762 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.224539 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:44:46.650974175 +0000 UTC Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.253279 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.253340 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.253277 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.253277 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:05 crc kubenswrapper[4784]: E0123 06:21:05.253538 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:05 crc kubenswrapper[4784]: E0123 06:21:05.253679 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:05 crc kubenswrapper[4784]: E0123 06:21:05.253792 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:05 crc kubenswrapper[4784]: E0123 06:21:05.253932 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.290893 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.290954 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.290962 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.290976 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.290987 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.394152 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.394267 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.394287 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.394313 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.394330 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.497358 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.497397 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.497406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.497421 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.497432 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.601082 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.601132 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.601142 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.601165 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.601177 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.705194 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.705353 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.705384 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.705428 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.705475 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.808464 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.808519 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.808532 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.808555 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.808569 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.912158 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.912217 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.912230 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.912248 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:05 crc kubenswrapper[4784]: I0123 06:21:05.912262 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:05Z","lastTransitionTime":"2026-01-23T06:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.015160 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.015220 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.015237 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.015263 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.015279 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.119693 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.119790 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.119807 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.119843 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.119868 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.223770 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.223837 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.223851 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.223876 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.223893 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.224865 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:18:47.35663077 +0000 UTC Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.326933 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.326988 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.327000 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.327023 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.327039 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.429814 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.429868 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.429879 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.429898 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.429912 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.533002 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.533053 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.533067 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.533088 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.533102 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.636034 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.636123 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.636137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.636160 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.636182 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.739578 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.739640 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.739653 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.739673 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.739687 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.843230 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.843322 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.843347 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.843380 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.843404 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.946192 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.946248 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.946265 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.946287 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:06 crc kubenswrapper[4784]: I0123 06:21:06.946303 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:06Z","lastTransitionTime":"2026-01-23T06:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.049216 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.049258 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.049270 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.049286 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.049303 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.152441 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.152509 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.152520 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.152537 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.152548 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.225742 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:58:54.692875544 +0000 UTC Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.253356 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.253458 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.253520 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.253393 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:07 crc kubenswrapper[4784]: E0123 06:21:07.254072 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:07 crc kubenswrapper[4784]: E0123 06:21:07.254599 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:07 crc kubenswrapper[4784]: E0123 06:21:07.255050 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:07 crc kubenswrapper[4784]: E0123 06:21:07.255280 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.256540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.256625 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.256643 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.256667 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.256689 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.282026 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.296851 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.314930 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.328557 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.345983 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.359432 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.359474 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.359484 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.359498 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.359508 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.363464 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.377101 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.396831 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.409155 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.424691 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.438311 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.456907 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.462247 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.462290 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.462301 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.462322 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.462333 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.476107 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9d
a410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.492096 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.507042 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.522392 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.535512 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.547127 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.564184 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.564224 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.564236 4784 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.564252 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.564263 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.666798 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.666881 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.666904 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.666987 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.667009 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.770810 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.770911 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.770926 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.770947 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.770961 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.793629 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/0.log" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.793721 4784 generic.go:334] "Generic (PLEG): container finished" podID="76b58650-2600-48a5-b11e-2ed4503cc6b2" containerID="373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953" exitCode=1 Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.793806 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8cjm4" event={"ID":"76b58650-2600-48a5-b11e-2ed4503cc6b2","Type":"ContainerDied","Data":"373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.794504 4784 scope.go:117] "RemoveContainer" containerID="373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.819307 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.839711 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.861011 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"2026-01-23T06:20:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe\\\\n2026-01-23T06:20:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe to /host/opt/cni/bin/\\\\n2026-01-23T06:20:22Z [verbose] multus-daemon started\\\\n2026-01-23T06:20:22Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:21:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.873874 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.873935 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.873952 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.873976 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.873992 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.874555 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc 
kubenswrapper[4784]: I0123 06:21:07.890185 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.906390 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.924924 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.939813 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.961196 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.975500 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.976569 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.976742 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.976918 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 
06:21:07.977033 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.977123 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:07Z","lastTransitionTime":"2026-01-23T06:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:07 crc kubenswrapper[4784]: I0123 06:21:07.989674 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:07Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.005562 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.018644 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.034330 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3fe78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.049899 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:1
9:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76b
c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.064183 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.076860 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.079583 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.079809 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.079901 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc 
kubenswrapper[4784]: I0123 06:21:08.079972 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.080031 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.099628 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26
408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.182326 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.182356 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.182363 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.182376 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.182385 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.226423 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:32:57.445888319 +0000 UTC Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.285530 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.285569 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.285579 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.285597 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.285609 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.393324 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.393400 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.393411 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.393434 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.393446 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.496965 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.497262 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.497345 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.497431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.497527 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.600407 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.600865 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.601070 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.601261 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.601410 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.704823 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.704896 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.704915 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.704946 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.704967 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.800380 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/0.log" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.800457 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8cjm4" event={"ID":"76b58650-2600-48a5-b11e-2ed4503cc6b2","Type":"ContainerStarted","Data":"5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.807599 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.807644 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.807653 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.807678 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.807687 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.820337 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.834558 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.849343 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.863542 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.879399 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.894355 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.918636 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.918736 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.918789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:08 crc 
kubenswrapper[4784]: I0123 06:21:08.918824 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.918848 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:08Z","lastTransitionTime":"2026-01-23T06:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.924683 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.940913 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.959705 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"2026-01-23T06:20:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe\\\\n2026-01-23T06:20:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe to /host/opt/cni/bin/\\\\n2026-01-23T06:20:22Z [verbose] multus-daemon started\\\\n2026-01-23T06:20:22Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:21:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.980678 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:08 crc kubenswrapper[4784]: I0123 06:21:08.995566 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:08Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.007027 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e
59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:09Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.019491 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:09Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.022440 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.022519 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.022534 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.022564 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.022585 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.034270 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:09Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.046614 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:09Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.063672 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:09Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.079419 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:21:09Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.097512 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:09Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.126362 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.126408 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.126420 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.126444 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.126456 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.227526 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:47:06.307017242 +0000 UTC Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.230272 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.230320 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.230338 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.230361 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.230375 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.253921 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.253955 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.253996 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.253923 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:09 crc kubenswrapper[4784]: E0123 06:21:09.254120 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:09 crc kubenswrapper[4784]: E0123 06:21:09.254242 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:09 crc kubenswrapper[4784]: E0123 06:21:09.254372 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:09 crc kubenswrapper[4784]: E0123 06:21:09.254641 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.333573 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.333643 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.333660 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.333684 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.333698 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.437491 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.437561 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.437573 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.437596 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.437609 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.540681 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.540724 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.540733 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.540774 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.540789 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.643945 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.644013 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.644031 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.644054 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.644072 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.747012 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.747081 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.747093 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.747115 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.747127 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.850334 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.850390 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.850401 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.850419 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.850432 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.955145 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.955243 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.955254 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.955279 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:09 crc kubenswrapper[4784]: I0123 06:21:09.955291 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:09Z","lastTransitionTime":"2026-01-23T06:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.057979 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.058031 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.058044 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.058067 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.058081 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.161863 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.161921 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.161934 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.161958 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.161971 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.228239 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:03:18.85199939 +0000 UTC Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.265031 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.265097 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.265116 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.265138 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.265152 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.369112 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.369163 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.369177 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.369194 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.369205 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.478084 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.478134 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.478144 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.478164 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.478177 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.581769 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.581864 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.581880 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.581902 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.581923 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.685251 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.685343 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.685358 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.685380 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.685411 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.788298 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.788363 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.788378 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.788401 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.788413 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.893188 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.893273 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.893290 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.893312 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.893353 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.996496 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.996564 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.996575 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.996592 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:10 crc kubenswrapper[4784]: I0123 06:21:10.996602 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:10Z","lastTransitionTime":"2026-01-23T06:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.100665 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.100731 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.100741 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.100798 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.100811 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.204596 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.204636 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.204650 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.204672 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.204688 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.229092 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:23:13.125120181 +0000 UTC Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.253633 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.253687 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.253633 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.253853 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.253927 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.253997 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.254046 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.254337 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.307396 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.307467 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.307483 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.307507 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.307524 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.410660 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.410699 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.410709 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.410726 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.410738 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.515035 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.515099 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.515137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.515158 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.515172 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.618890 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.618958 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.618972 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.618999 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.619019 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.675699 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.675761 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.675771 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.675789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.675801 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.689014 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.694518 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.694587 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.694601 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.694641 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.694669 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.711206 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.717575 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.717652 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.717668 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.717744 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.718002 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.735379 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.741824 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.741896 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.741913 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.741937 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.741952 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.757114 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.762147 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.762204 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.762215 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.762236 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.762607 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.778045 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:11 crc kubenswrapper[4784]: E0123 06:21:11.778219 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.780453 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.780476 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.780484 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.780502 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.780512 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.884463 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.884514 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.884527 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.884549 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.884565 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.988890 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.988950 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.988962 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.988982 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:11 crc kubenswrapper[4784]: I0123 06:21:11.988994 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:11Z","lastTransitionTime":"2026-01-23T06:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.091334 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.091387 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.091409 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.091427 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.091438 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.194649 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.194717 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.194734 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.194813 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.194844 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.229637 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 15:00:38.508884793 +0000 UTC Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.299522 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.299567 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.299580 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.299598 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.299612 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.402276 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.402341 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.402356 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.402378 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.402395 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.505504 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.505553 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.505565 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.505586 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.505599 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.609489 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.609542 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.609553 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.609571 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.609587 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.712844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.712893 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.712906 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.712924 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.712936 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.815893 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.815946 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.815963 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.815989 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.816009 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.918896 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.918961 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.918976 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.918995 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:12 crc kubenswrapper[4784]: I0123 06:21:12.919008 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:12Z","lastTransitionTime":"2026-01-23T06:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.022372 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.022431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.022443 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.022464 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.022477 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.124928 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.124969 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.124979 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.124995 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.125005 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.228410 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.228485 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.228503 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.228532 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.228553 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.230319 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:24:46.249987706 +0000 UTC Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.253329 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:13 crc kubenswrapper[4784]: E0123 06:21:13.253536 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.253828 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:13 crc kubenswrapper[4784]: E0123 06:21:13.253906 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.254064 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:13 crc kubenswrapper[4784]: E0123 06:21:13.254125 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.254384 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:13 crc kubenswrapper[4784]: E0123 06:21:13.254461 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.333649 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.333700 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.333710 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.333727 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.333739 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.436415 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.436474 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.436486 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.436508 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.436522 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.540466 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.540520 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.540533 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.540557 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.540572 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.643852 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.643928 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.643944 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.643971 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.643989 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.746287 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.746333 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.746347 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.746364 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.746375 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.848206 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.848249 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.848259 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.848275 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.848287 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.951265 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.951325 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.951338 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.951358 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:13 crc kubenswrapper[4784]: I0123 06:21:13.951373 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:13Z","lastTransitionTime":"2026-01-23T06:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.054693 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.054795 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.054811 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.054832 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.054846 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.157603 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.157662 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.157674 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.157695 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.157708 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.231118 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:51:16.503105263 +0000 UTC Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.261211 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.261278 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.261290 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.261312 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.261327 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.365334 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.365387 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.365415 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.365441 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.365458 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.469711 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.469796 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.469812 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.469835 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.469849 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.574118 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.574173 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.574187 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.574208 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.574224 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.677968 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.678050 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.678069 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.678101 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.678125 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.781515 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.781592 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.781616 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.781648 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.781673 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.884332 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.884381 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.884395 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.884416 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.884428 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.988564 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.988641 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.988667 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.988704 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:14 crc kubenswrapper[4784]: I0123 06:21:14.988734 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:14Z","lastTransitionTime":"2026-01-23T06:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.091853 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.091920 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.091942 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.091980 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.092007 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.195361 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.195451 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.195476 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.195540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.195569 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.232046 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:44:02.299236407 +0000 UTC Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.252900 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.253200 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.253253 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:15 crc kubenswrapper[4784]: E0123 06:21:15.253598 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:15 crc kubenswrapper[4784]: E0123 06:21:15.253686 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:15 crc kubenswrapper[4784]: E0123 06:21:15.253921 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.254012 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:15 crc kubenswrapper[4784]: E0123 06:21:15.254300 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.254599 4784 scope.go:117] "RemoveContainer" containerID="c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.298736 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.298814 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.298827 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.298848 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.298861 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.401437 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.401479 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.401490 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.401509 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.401522 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.506269 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.506328 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.506338 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.506357 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.506372 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.621368 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.621431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.621441 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.621477 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.621491 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.733296 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.733415 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.733451 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.733488 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.733512 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.834223 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/2.log" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.836052 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.836128 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.836145 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.836175 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.836193 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.836909 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.837885 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.852695 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.870837 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.891141 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.903290 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.925079 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.936436 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc 
kubenswrapper[4784]: I0123 06:21:15.938681 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.938717 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.938730 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.938771 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.938784 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:15Z","lastTransitionTime":"2026-01-23T06:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.950538 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.963083 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.977941 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:15 crc kubenswrapper[4784]: I0123 06:21:15.991548 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:15Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.012326 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:16Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.025944 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:16Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.038291 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:16Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.046052 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.046095 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.046107 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc 
kubenswrapper[4784]: I0123 06:21:16.046129 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.046140 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.061776 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26
408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:16Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.073877 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:16Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.150653 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.150710 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.150724 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.150768 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.150788 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.163047 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:16Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.180303 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:16Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.197947 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"2026-01-23T06:20:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe\\\\n2026-01-23T06:20:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe to /host/opt/cni/bin/\\\\n2026-01-23T06:20:22Z [verbose] multus-daemon started\\\\n2026-01-23T06:20:22Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:21:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:16Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.233221 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:44:35.503424035 +0000 UTC Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.253365 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.253434 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.253455 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.253477 4784 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.253489 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.356321 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.356364 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.356375 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.356394 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.356407 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.458971 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.459029 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.459040 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.459058 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.459069 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.561990 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.562026 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.562035 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.562051 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.562062 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.664784 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.664849 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.664859 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.664876 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.664886 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.768010 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.768073 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.768109 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.768149 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.768187 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.871209 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.871270 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.871289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.871314 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.871330 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.974819 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.974879 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.974895 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.974914 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:16 crc kubenswrapper[4784]: I0123 06:21:16.974927 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:16Z","lastTransitionTime":"2026-01-23T06:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.078528 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.078591 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.078605 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.078629 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.078648 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.181723 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.181777 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.181789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.181806 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.181816 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.233656 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:19:16.990130738 +0000 UTC Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.253345 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.253345 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.253523 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:17 crc kubenswrapper[4784]: E0123 06:21:17.253705 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.253783 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:17 crc kubenswrapper[4784]: E0123 06:21:17.253934 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:17 crc kubenswrapper[4784]: E0123 06:21:17.254068 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:17 crc kubenswrapper[4784]: E0123 06:21:17.254299 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.268481 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.276088 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5069fe1f444d91f332095ee10707394c5ba532193
fa0b03068a85ec8c6c80916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"2026-01-23T06:20:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe\\\\n2026-01-23T06:20:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe to /host/opt/cni/bin/\\\\n2026-01-23T06:20:22Z [verbose] multus-daemon started\\\\n2026-01-23T06:20:22Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:21:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.284853 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.284928 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.284955 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.284994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.285021 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.298882 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.315279 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.334735 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.347058 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.372831 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.387729 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc 
kubenswrapper[4784]: I0123 06:21:17.388209 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.388264 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.388293 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.388323 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.388345 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.401078 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39
cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.413705 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.426815 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.445998 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.462994 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.477202 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.490685 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.491371 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.491423 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.491438 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc 
kubenswrapper[4784]: I0123 06:21:17.491461 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.491478 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.510328 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26
408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.529794 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.550270 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3fe78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.572825 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:1
9:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76b
c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.594405 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.594454 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.594464 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc kubenswrapper[4784]: 
I0123 06:21:17.594482 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.594495 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.697656 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.697715 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.697733 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.697775 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.697796 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.801354 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.801445 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.801470 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.801506 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.801533 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.848377 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/3.log" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.849685 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/2.log" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.854257 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f" exitCode=1 Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.854392 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.854509 4784 scope.go:117] "RemoveContainer" containerID="c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.856044 4784 scope.go:117] "RemoveContainer" containerID="960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f" Jan 23 06:21:17 crc kubenswrapper[4784]: E0123 06:21:17.856355 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.896419 4784 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb36
65164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754
906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.909050 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.909108 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.909124 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.909147 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.909163 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:17Z","lastTransitionTime":"2026-01-23T06:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.916219 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.933972 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373
301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"2026-01-23T06:20:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe\\\\n2026-01-23T06:20:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe to /host/opt/cni/bin/\\\\n2026-01-23T06:20:22Z [verbose] multus-daemon started\\\\n2026-01-23T06:20:22Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:21:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/
kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.945429 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.967243 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:16Z\\\",\\\"message\\\":\\\"rver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-9652h openshift-machine-config-operator/machine-config-daemon-r7dpd]\\\\nI0123 06:21:16.528475 6791 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 06:21:16.528496 6791 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0123 
06:21:16.528536 6791 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0123 06:21:16.528554 6791 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc kubenswrapper[4784]: I0123 06:21:17.982105 4784 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:17 crc 
kubenswrapper[4784]: I0123 06:21:17.993723 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:17Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.007645 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.011620 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.011656 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.011666 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.011685 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.011697 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.019396 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.036273 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.051920 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.067718 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.087326 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.104458 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.114929 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.114990 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.115005 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.115026 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.115039 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.120001 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.133345 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637e
c58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3fe78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.148998 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.163440 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76311b61-3fe6-478e-8ab1-7a9227351764\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d8d335a55d46d0af562baebd8a838e5306dc05b5307fc63cf8857eace36ff28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.187619 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:18Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.217937 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 
06:21:18.217988 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.217996 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.218014 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.218026 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.234447 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:31:47.375211272 +0000 UTC Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.321345 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.321406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.321419 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.321443 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.321459 4784 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.425360 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.425423 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.425436 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.425458 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.425471 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.529721 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.529842 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.529866 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.529896 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.529916 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.633817 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.633940 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.633961 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.633995 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.634211 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.737991 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.738062 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.738075 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.738097 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.738113 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.842184 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.842269 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.842292 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.842319 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.842337 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.860708 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/3.log" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.945952 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.946001 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.946014 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.946035 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:18 crc kubenswrapper[4784]: I0123 06:21:18.946048 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:18Z","lastTransitionTime":"2026-01-23T06:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.050274 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.050357 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.050393 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.050440 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.050468 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.155147 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.155251 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.155278 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.155311 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.155335 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.235308 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:23:18.964311147 +0000 UTC Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.253841 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.253944 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.254018 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.254178 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:19 crc kubenswrapper[4784]: E0123 06:21:19.254160 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:19 crc kubenswrapper[4784]: E0123 06:21:19.254584 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:19 crc kubenswrapper[4784]: E0123 06:21:19.254717 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:19 crc kubenswrapper[4784]: E0123 06:21:19.254864 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.258353 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.258401 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.258419 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.258441 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.258454 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.360567 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.360639 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.360653 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.360675 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.360688 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.463495 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.463559 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.463571 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.463594 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.463609 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.568028 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.568126 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.568148 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.568178 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.568199 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.672268 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.672347 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.672366 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.672396 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.672416 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.776351 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.776437 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.776463 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.776496 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.776520 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.880040 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.880116 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.880138 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.880164 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.880185 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.983559 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.983624 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.983642 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.983671 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:19 crc kubenswrapper[4784]: I0123 06:21:19.983692 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:19Z","lastTransitionTime":"2026-01-23T06:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.086182 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.086240 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.086258 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.086282 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.086346 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.189248 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.189311 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.189323 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.189339 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.189352 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.236427 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:07:38.029999029 +0000 UTC Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.292600 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.292668 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.292684 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.292708 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.292725 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.396102 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.396147 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.396155 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.396170 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.396180 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.499373 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.499436 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.499449 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.499474 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.499488 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.603227 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.603299 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.603318 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.603348 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.603372 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.706472 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.706527 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.706544 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.706579 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.706595 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.809701 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.809818 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.809844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.809872 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.809892 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.912406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.912447 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.912457 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.912475 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:20 crc kubenswrapper[4784]: I0123 06:21:20.912486 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:20Z","lastTransitionTime":"2026-01-23T06:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.015205 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.015270 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.015283 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.015308 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.015709 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.118931 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.118981 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.118994 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.119013 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.119025 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.205725 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.205971 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:25.205949725 +0000 UTC m=+148.438457699 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.222157 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.222236 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.222251 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.222272 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.222288 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.237701 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:11:06.908330466 +0000 UTC Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.253394 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.253603 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.253618 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.253739 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.253922 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.254043 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.254286 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.254348 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.307051 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.307127 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:21 crc 
kubenswrapper[4784]: I0123 06:21:21.307185 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.307233 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307328 4784 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307384 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307408 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307427 4784 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307440 
4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.307409106 +0000 UTC m=+148.539917120 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307471 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.307457367 +0000 UTC m=+148.539965341 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307330 4784 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307508 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:25.307499328 +0000 UTC m=+148.540007302 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307542 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307589 4784 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307617 4784 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:21:21 crc kubenswrapper[4784]: E0123 06:21:21.307724 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.307697312 +0000 UTC m=+148.540205326 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.325724 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.325772 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.325782 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.325798 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.325810 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.429220 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.429303 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.429327 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.429363 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.429390 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.533261 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.533323 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.533332 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.533351 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.533362 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.637666 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.637718 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.637730 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.637778 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.637792 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.742101 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.742156 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.742166 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.742192 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.742206 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.844775 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.844821 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.844837 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.844859 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.844871 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.947264 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.947314 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.947323 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.947340 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:21 crc kubenswrapper[4784]: I0123 06:21:21.947348 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:21Z","lastTransitionTime":"2026-01-23T06:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.049708 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.049810 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.049832 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.049855 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.049874 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.107806 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.107870 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.107894 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.107922 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.107944 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: E0123 06:21:22.124168 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.129628 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.129668 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.129679 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.129695 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.129708 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: E0123 06:21:22.148937 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.153405 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.153439 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.153492 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.153508 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.153520 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: E0123 06:21:22.166572 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.170705 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.170795 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.170815 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.170842 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.170871 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: E0123 06:21:22.188301 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.192795 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.192827 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.192836 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.192852 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.192861 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: E0123 06:21:22.207997 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:22Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:22 crc kubenswrapper[4784]: E0123 06:21:22.208131 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.209630 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.209677 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.209694 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.209716 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.209733 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.238632 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:37:41.773616279 +0000 UTC Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.311623 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.311651 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.311660 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.311676 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.311685 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.413774 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.413833 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.413850 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.413870 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.413883 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.524804 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.524888 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.524911 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.524944 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.524970 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.629056 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.629098 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.629108 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.629124 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.629138 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.731407 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.731451 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.731464 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.731481 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.731493 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.834054 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.834124 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.834142 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.834165 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.834182 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.937214 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.937289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.937309 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.937334 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:22 crc kubenswrapper[4784]: I0123 06:21:22.937354 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:22Z","lastTransitionTime":"2026-01-23T06:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.040703 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.040825 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.040846 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.040875 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.040896 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.144039 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.144088 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.144100 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.144117 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.144129 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.239619 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:16:34.57751768 +0000 UTC Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.247119 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.247182 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.247239 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.247266 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.247285 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.253502 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.253535 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:23 crc kubenswrapper[4784]: E0123 06:21:23.253608 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.253630 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.253723 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:23 crc kubenswrapper[4784]: E0123 06:21:23.253801 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:23 crc kubenswrapper[4784]: E0123 06:21:23.253802 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:23 crc kubenswrapper[4784]: E0123 06:21:23.253853 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.350279 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.350346 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.350365 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.350392 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.350411 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.453937 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.454020 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.454038 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.454064 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.454097 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.557428 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.557488 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.557510 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.557540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.557560 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.660936 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.661015 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.661041 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.661077 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.661103 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.764669 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.764733 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.764782 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.764812 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.764829 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.867866 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.867908 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.867920 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.867937 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.867950 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.971515 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.971567 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.971584 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.971608 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:23 crc kubenswrapper[4784]: I0123 06:21:23.971626 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:23Z","lastTransitionTime":"2026-01-23T06:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.074714 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.074812 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.074834 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.074859 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.074877 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.178331 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.178626 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.178642 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.178668 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.178684 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.239953 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:49:07.263033276 +0000 UTC Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.281686 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.281785 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.281804 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.281830 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.281851 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.385916 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.385965 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.385977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.385997 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.386008 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.489639 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.489712 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.489734 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.489797 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.489833 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.593162 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.593230 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.593436 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.593474 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.593496 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.696682 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.696734 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.696769 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.696789 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.696801 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.800018 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.800074 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.800092 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.800119 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.800137 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.903421 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.903475 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.903489 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.903506 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:24 crc kubenswrapper[4784]: I0123 06:21:24.903516 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:24Z","lastTransitionTime":"2026-01-23T06:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.007008 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.007074 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.007097 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.007129 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.007152 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.110208 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.110260 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.110271 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.110293 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.110308 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.212792 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.212837 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.212849 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.212866 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.212879 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.240666 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:33:30.906246775 +0000 UTC Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.253345 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.253435 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:25 crc kubenswrapper[4784]: E0123 06:21:25.253515 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.253371 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.253594 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:25 crc kubenswrapper[4784]: E0123 06:21:25.253632 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:25 crc kubenswrapper[4784]: E0123 06:21:25.253774 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:25 crc kubenswrapper[4784]: E0123 06:21:25.253884 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.315397 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.315463 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.315479 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.315505 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.315525 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.418605 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.418651 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.418661 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.418675 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.418686 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.521700 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.521771 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.521784 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.521804 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.521816 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.624949 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.625018 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.625036 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.625069 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.625096 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.728650 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.728706 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.728718 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.728739 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.728773 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.832178 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.832291 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.832332 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.832407 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.832433 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.935631 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.935694 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.935708 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.935726 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:25 crc kubenswrapper[4784]: I0123 06:21:25.935740 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:25Z","lastTransitionTime":"2026-01-23T06:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.039693 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.039828 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.039851 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.039883 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.039921 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.142954 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.143026 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.143038 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.143054 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.143066 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.241460 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:47:10.986199792 +0000 UTC Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.246667 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.246828 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.246848 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.246878 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.246909 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.350477 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.350536 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.350547 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.350571 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.350590 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.454813 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.454876 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.454892 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.454915 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.454936 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.557475 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.557521 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.557534 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.557550 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.557562 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.660741 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.660871 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.660897 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.660928 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.660951 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.765044 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.765103 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.765118 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.765137 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.765151 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.869778 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.869844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.869859 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.869885 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.869901 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.973865 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.973963 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.974000 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.974036 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:26 crc kubenswrapper[4784]: I0123 06:21:26.974059 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:26Z","lastTransitionTime":"2026-01-23T06:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.077414 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.077480 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.077530 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.077557 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.077579 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.180069 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.180133 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.180145 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.180170 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.180185 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.242596 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:22:44.702922234 +0000 UTC Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.253106 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.253206 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.253218 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:27 crc kubenswrapper[4784]: E0123 06:21:27.253332 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:27 crc kubenswrapper[4784]: E0123 06:21:27.253462 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.253547 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:27 crc kubenswrapper[4784]: E0123 06:21:27.253642 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:27 crc kubenswrapper[4784]: E0123 06:21:27.253727 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.273507 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a9
5b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"ter
minated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b27
5a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.284447 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.284494 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.284507 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.284529 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.284545 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.288112 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.303321 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.323312 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.337606 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76311b61-3fe6-478e-8ab1-7a9227351764\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d8d335a55d46d0af562baebd8a838e5306dc05b5307fc63cf8857eace36ff28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.356537 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.373566 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.387422 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.387458 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.387477 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc 
kubenswrapper[4784]: I0123 06:21:27.387495 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.387504 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.407653 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.426657 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.442938 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"2026-01-23T06:20:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe\\\\n2026-01-23T06:20:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe to /host/opt/cni/bin/\\\\n2026-01-23T06:20:22Z [verbose] multus-daemon started\\\\n2026-01-23T06:20:22Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:21:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.470371 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6c13cd654c5bf17d4c2f82b1457e32f8918a723d15d512b22fd2f6211e2a767\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:20:45Z\\\",\\\"message\\\":\\\"0 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 06:20:45.318394 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 06:20:45.318411 6399 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 06:20:45.318420 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:20:45.318427 
6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 06:20:45.318428 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:20:45.318427 6399 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:20:45.318407 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:20:45.318459 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 06:20:45.319003 6399 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:20:45.319034 6399 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:20:45.319056 6399 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:20:45.319100 6399 factory.go:656] Stopping watch factory\\\\nI0123 06:20:45.319102 6399 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:20:45.319116 6399 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:20:45.319143 6399 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 06:20:45.319233 6399 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:16Z\\\",\\\"message\\\":\\\"rver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-9652h openshift-machine-config-operator/machine-config-daemon-r7dpd]\\\\nI0123 06:21:16.528475 6791 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 06:21:16.528496 6791 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0123 
06:21:16.528536 6791 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0123 06:21:16.528554 6791 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.485045 4784 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc 
kubenswrapper[4784]: I0123 06:21:27.491802 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.491865 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.491881 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.491902 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.491917 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.508472 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39
cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.527175 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.544929 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.558705 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.574979 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.590088 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.594983 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.595076 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.595104 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.595141 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.595170 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.612462 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:27Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.698495 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.698574 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.698591 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.698619 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.698639 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.802954 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.803418 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.803504 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.803585 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.803660 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.907291 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.907355 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.907367 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.907388 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:27 crc kubenswrapper[4784]: I0123 06:21:27.907401 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:27Z","lastTransitionTime":"2026-01-23T06:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.011256 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.011659 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.011853 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.012024 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.012204 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.115245 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.115577 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.115661 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.115735 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.115833 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.219437 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.219508 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.219531 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.219561 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.219640 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.242968 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:52:32.204236392 +0000 UTC Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.322455 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.322515 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.322534 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.322556 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.322573 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.426363 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.426414 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.426431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.426453 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.426467 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.529825 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.529868 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.529877 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.529894 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.529907 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.633379 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.633426 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.633438 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.633457 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.633469 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.736892 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.736946 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.736962 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.736981 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.736996 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.840496 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.840555 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.840570 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.840590 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.840603 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.943869 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.943911 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.943920 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.943938 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:28 crc kubenswrapper[4784]: I0123 06:21:28.943948 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:28Z","lastTransitionTime":"2026-01-23T06:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.046955 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.047013 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.047023 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.047041 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.047053 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.149921 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.149974 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.149986 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.150005 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.150019 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.244105 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:54:10.785893435 +0000 UTC Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252706 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252716 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252782 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252796 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252815 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252815 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252846 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252895 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.252829 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: E0123 06:21:29.252850 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:29 crc kubenswrapper[4784]: E0123 06:21:29.253025 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:29 crc kubenswrapper[4784]: E0123 06:21:29.253068 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:29 crc kubenswrapper[4784]: E0123 06:21:29.253099 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.355333 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.355369 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.355379 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.355399 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.355411 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.458412 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.458477 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.458496 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.458522 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.458538 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.561304 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.561379 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.561396 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.561417 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.561432 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.664594 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.664676 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.664700 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.664868 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.664898 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.768792 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.768847 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.768859 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.768878 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.768893 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.871780 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.871838 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.871849 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.871869 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.871882 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.974533 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.974595 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.974608 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.974627 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:29 crc kubenswrapper[4784]: I0123 06:21:29.974640 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:29Z","lastTransitionTime":"2026-01-23T06:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.077816 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.077896 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.077910 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.077935 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.077961 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.181294 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.181358 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.181369 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.181390 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.181405 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.245250 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:29:10.336513444 +0000 UTC Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.283640 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.283703 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.283714 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.283733 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.283746 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.386876 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.386925 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.386935 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.386951 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.386962 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.489503 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.489575 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.489598 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.489627 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.489649 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.592962 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.593035 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.593056 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.593081 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.593095 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.696723 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.696826 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.696844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.696868 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.696887 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.799607 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.799674 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.799685 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.799701 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.799710 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.902485 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.902540 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.902557 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.902581 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:30 crc kubenswrapper[4784]: I0123 06:21:30.902597 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:30Z","lastTransitionTime":"2026-01-23T06:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.004799 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.004844 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.004853 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.004867 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.004878 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.107341 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.107397 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.107412 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.107428 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.107439 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.210139 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.210185 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.210200 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.210219 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.210236 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.246251 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:12:19.071121092 +0000 UTC Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.253593 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:31 crc kubenswrapper[4784]: E0123 06:21:31.253723 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.253828 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.253889 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.253941 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:31 crc kubenswrapper[4784]: E0123 06:21:31.254015 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:31 crc kubenswrapper[4784]: E0123 06:21:31.254119 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:31 crc kubenswrapper[4784]: E0123 06:21:31.254283 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.313853 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.313890 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.313900 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.313916 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.313926 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.417904 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.417974 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.417984 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.418002 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.418013 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.521884 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.521931 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.521940 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.521957 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.521969 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.626176 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.626248 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.626264 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.626292 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.626317 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.729173 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.729215 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.729224 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.729241 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.729255 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.832029 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.832087 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.832105 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.832127 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.832138 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.934249 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.934296 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.934308 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.934325 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:31 crc kubenswrapper[4784]: I0123 06:21:31.934336 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:31Z","lastTransitionTime":"2026-01-23T06:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.037331 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.037382 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.037396 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.037416 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.037431 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.140712 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.140785 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.140797 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.140814 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.140831 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.244292 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.244406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.244426 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.244452 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.244472 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.246687 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:51:39.827629055 +0000 UTC Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.247256 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.247297 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.247314 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.247341 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.247360 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.254000 4784 scope.go:117] "RemoveContainer" containerID="960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f" Jan 23 06:21:32 crc kubenswrapper[4784]: E0123 06:21:32.254275 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" Jan 23 06:21:32 crc kubenswrapper[4784]: E0123 06:21:32.266195 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.275071 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.276442 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.276510 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.276533 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.276562 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.276586 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.290535 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: E0123 06:21:32.292725 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.298934 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.299018 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.299042 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.299074 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.299095 4784 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.307156 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: E0123 06:21:32.313838 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.318221 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.318252 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.318262 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.318280 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.318291 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.324640 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3fe78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: E0123 06:21:32.331687 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.335797 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.335831 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.335841 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.335857 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.335871 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.339266 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117
783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.352725 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76311b61-3fe6-478e-8ab1-7a9227351764\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d8d335a55d46d0af562baebd8a838e5306dc05b5307fc63cf8857eace36ff28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: E0123 06:21:32.353009 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6bf1eead-6d5f-443a-9fe0-75bfca2eafd3\\\",\\\"systemUUID\\\":\\\"0719c803-6211-4272-a78a-6e99726b5e37\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: E0123 06:21:32.353220 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.355048 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.355090 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.355107 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.355136 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.355156 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.372932 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.389034 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8cc
f61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.408155 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110
f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.421100 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22db5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.441787 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.465592 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.466279 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.466777 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.466795 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.467147 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.467166 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.481614 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"2026-01-23T06:20:22+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe\\\\n2026-01-23T06:20:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe to /host/opt/cni/bin/\\\\n2026-01-23T06:20:22Z [verbose] multus-daemon started\\\\n2026-01-23T06:20:22Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:21:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.496427 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.515243 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.535233 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.548469 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.570224 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.570488 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.570603 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.570666 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.570720 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.572275 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:16Z\\\",\\\"message\\\":\\\"rver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-9652h openshift-machine-config-operator/machine-config-daemon-r7dpd]\\\\nI0123 06:21:16.528475 6791 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 06:21:16.528496 6791 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0123 06:21:16.528536 6791 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0123 06:21:16.528554 6791 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:21:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.584932 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:32Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.674281 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.674339 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.674352 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.674379 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.674396 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.778003 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.778079 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.778101 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.778127 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.778146 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.881370 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.881779 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.881850 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.881952 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.882016 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.985967 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.986538 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.986654 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.986769 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:32 crc kubenswrapper[4784]: I0123 06:21:32.986875 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:32Z","lastTransitionTime":"2026-01-23T06:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.089615 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.089664 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.089676 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.089697 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.089713 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.192858 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.192944 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.192971 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.193002 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.193029 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.248160 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:42:05.56872307 +0000 UTC Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.252960 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.252960 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.252968 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.252987 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:33 crc kubenswrapper[4784]: E0123 06:21:33.253530 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:33 crc kubenswrapper[4784]: E0123 06:21:33.253334 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:33 crc kubenswrapper[4784]: E0123 06:21:33.253663 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:33 crc kubenswrapper[4784]: E0123 06:21:33.253185 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.295975 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.296056 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.296071 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.296097 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.296115 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.399974 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.400044 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.400064 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.400089 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.400111 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.503148 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.503220 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.503243 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.503273 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.503295 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.606227 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.606296 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.606325 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.606361 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.606387 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.710374 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.710414 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.710426 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.710447 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.710458 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.814297 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.814443 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.814468 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.814497 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.814519 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.918312 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.918366 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.918376 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.918400 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:33 crc kubenswrapper[4784]: I0123 06:21:33.918414 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:33Z","lastTransitionTime":"2026-01-23T06:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.022833 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.022925 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.022945 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.022976 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.022999 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.125943 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.125999 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.126012 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.126032 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.126046 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.229312 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.229378 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.229401 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.229431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.229453 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.248934 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:11:19.786703523 +0000 UTC Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.333058 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.333109 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.333128 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.333152 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.333169 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.436117 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.436176 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.436193 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.436218 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.436235 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.539151 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.539259 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.539289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.539323 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.539345 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.642269 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.642336 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.642344 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.642360 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.642370 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.745327 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.745402 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.745419 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.745444 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.745459 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.848805 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.848886 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.848906 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.848941 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.848962 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.952576 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.952641 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.952660 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.952686 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:34 crc kubenswrapper[4784]: I0123 06:21:34.952703 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:34Z","lastTransitionTime":"2026-01-23T06:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.055698 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.055839 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.055852 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.055900 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.055919 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.159675 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.159729 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.159742 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.159791 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.159809 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.249943 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:06:25.820932531 +0000 UTC Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.253343 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.253472 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:35 crc kubenswrapper[4784]: E0123 06:21:35.253547 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.253588 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.253586 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:35 crc kubenswrapper[4784]: E0123 06:21:35.253838 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:35 crc kubenswrapper[4784]: E0123 06:21:35.254329 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:35 crc kubenswrapper[4784]: E0123 06:21:35.254466 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.263233 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.263311 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.263332 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.263368 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.263392 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.367376 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.367436 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.367453 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.367482 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.367500 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.471189 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.471248 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.471260 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.471281 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.471298 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.574854 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.574925 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.574941 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.574961 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.574975 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.678838 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.678893 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.678907 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.678931 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.678946 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.782827 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.782910 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.782934 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.782963 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.782986 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.886384 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.886448 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.886487 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.886509 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.886526 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.989386 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.989455 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.989473 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.989503 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:35 crc kubenswrapper[4784]: I0123 06:21:35.989527 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:35Z","lastTransitionTime":"2026-01-23T06:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.093017 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.093104 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.093131 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.093160 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.093206 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.197173 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.197254 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.197274 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.197306 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.197329 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.250507 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:53:53.575113516 +0000 UTC Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.300743 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.300845 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.300857 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.300878 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.300890 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.403428 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.403479 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.403488 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.403505 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.403517 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.506520 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.506577 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.506697 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.506726 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.506744 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.609812 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.609882 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.609901 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.609940 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.609962 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.713251 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.713326 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.713344 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.713369 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.713393 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.815944 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.816002 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.816019 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.816043 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.816064 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.889276 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:36 crc kubenswrapper[4784]: E0123 06:21:36.889575 4784 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:21:36 crc kubenswrapper[4784]: E0123 06:21:36.889704 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs podName:cdf947ef-7279-4d43-854c-d836e0043e5b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:40.889679186 +0000 UTC m=+164.122187160 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs") pod "network-metrics-daemon-lcdgv" (UID: "cdf947ef-7279-4d43-854c-d836e0043e5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.919050 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.919097 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.919118 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.919141 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:36 crc kubenswrapper[4784]: I0123 06:21:36.919158 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:36Z","lastTransitionTime":"2026-01-23T06:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.021429 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.021491 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.021508 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.021527 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.021540 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.125258 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.125315 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.125336 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.125358 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.125375 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.229176 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.229242 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.229260 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.229285 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.229304 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.251010 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:52:10.726194369 +0000 UTC Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.253630 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.253823 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:37 crc kubenswrapper[4784]: E0123 06:21:37.254012 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.253886 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.253925 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:37 crc kubenswrapper[4784]: E0123 06:21:37.254267 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:37 crc kubenswrapper[4784]: E0123 06:21:37.254345 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:37 crc kubenswrapper[4784]: E0123 06:21:37.254502 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.267680 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f9zpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec6438ba-1338-40e2-9746-8cd62c5d0ce4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6647c8a73fe9de16b26c7bd91014649df55cf95b7169293cab842a1e20aa6b53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bwwdc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f9zpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.305649 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ef0442-94bc-46f2-a551-15b59d1a5cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:16Z\\\",\\\"message\\\":\\\"rver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-9652h openshift-machine-config-operator/machine-config-daemon-r7dpd]\\\\nI0123 06:21:16.528475 6791 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 06:21:16.528496 6791 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0123 06:21:16.528536 6791 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0123 06:21:16.528554 6791 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:21:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c85d2fbb2f3a9387f
9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5278\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9652h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.320569 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdf947ef-7279-4d43-854c-d836e0043e5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ls7mf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lcdgv\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.332640 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.332699 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.332718 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.332743 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.332782 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.340090 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c06486b-62a3-42b7-bf37-f6f34149b98e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d09ce02d12d083603fee9257fe237cd2303aeab94fe3b90607b26f0be5a65df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48e47f1bb90ab6043480997acbad39
cf3f14c4c303a330d2e0ed9d277813999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bed583aa95c5bdf643c3feff6b814da7a093d0ff638e37ea75bd4ead2ed6625\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87a4a20e59fbdd4ff88ca4fb0552248223e9e604d3eac58f556af608abbe36cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.358922 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a24ae053-cd33-45bb-964d-8adb9b05239b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:20:16Z\\\"
,\\\"message\\\":\\\"\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:20:16.489910 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:20:16.489950 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489959 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:20:16.489967 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:20:16.489971 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:20:16.489976 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:20:16.489980 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:20:16.490249 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0123 06:20:16.493475 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0123 06:20:16.493550 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 06:20:16.493686 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 06:20:16.493705 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 06:20:16.493867 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493998 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0123 06:20:16.493874 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nF0123 06:20:16.494011 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerI
D\\\":\\\"cri-o://b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.375582 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.392368 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.409145 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ca03e2c5114ac7d70b22cdd6c17df22751aba37f831002feb6ea6ab89bdfd2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.424837 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c468eb22e62987d4408757ccccfb123a85c5c0b95ec44b81f9d9fbd26d54f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://276401e17e1bf8d
31b31144a4b21c62d4948651713dd310deda8a88d6c884875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.435988 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.436054 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.436066 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.436119 4784 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.436135 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.440095 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce19e3ac-f68d-40a1-b01a-740a09dc59e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c68901300c3b658c09de09b98ca6a4a7afed9ea103659bd02b15ca732ade3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hpb56\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r7dpd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.458236 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6ts88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86ce0358-1c71-4b17-80b8-0c930b5356de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67d52da57473fe49b8c685d427bc67360c3f657f4fccaa17b3e1e395d215f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc0d735646eabdd0611d079e1daa641b751b1e904d97938c5556627584a2e63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c50236fab0a71551d5ea7b4b1d3940a4cdf4e78032542ffe7221338f0d8701\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06c70c6112466ffb43e4bb77a0e62c7ab05ba7681ab33df204fd48a374ef4f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45110f9897fcaea01aaaf30862c3d8c7ef48fe0893fbfeafc5c7dd5390221513\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae553de333a1a6bc5ca452e2b1dbeb7525a9957f61a59021c1a272a76ad12496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26408972d29ab839031190e272f2c165f33ca26f0cf8eda5dc7fb2b275a1dd4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:20:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwvzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6ts88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.472264 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9bs27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"294147c4-bce0-4cd5-99bf-d6d63b068c6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8872d39978d1721319d47a46b56c8e2c8c0a717b0d22d
b5298e92483184dad2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9bs27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.489305 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8563a82-9f1c-4972-843c-4461fef9994d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87f3d637ec58dd16d6a573471f65a49abc2e0570e3b90f0851360abb93fe3d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe48997fa87aa09f9162350da05422b0e9b3f
e78655a69f2aa49f86b8866eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrfl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9q9h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.507706 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4d4962-3243-42a1-b3da-bd74505a8daa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8618979ba3708b1ac99d43d05710c9a6a02c38324eb346631098fd885b30c070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a51e56bb323b94bca35e88d39c383775dc63d495c51a4e344e98afcdc4698c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://925053eec8c61071098e53ce2108729f1df9b570f71e3a611ad6ee80b92a76bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.523032 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76311b61-3fe6-478e-8ab1-7a9227351764\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d8d335a55d46d0af562baebd8a838e5306dc05b5307fc63cf8857eace36ff28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0c501824c5bc62076c5e2118342ad5dcddb15c3d7d34cd5905eab840aef4d52\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.539427 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.539675 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 
06:21:37.539721 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.539739 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.539794 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.539811 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.564207 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e342bcc4-f899-472e-9a51-2c29d76b3e63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:19:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89007e8cf7674bfc2b645a449c0fcc427a1618c194597eae2dfe5f3ca09fe7b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1c03e51209263b798c294478dd2b806a2a739d38fb7b7961fb3665164d865b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bbc743f8c8195b9d4acbd12710af60ab6addc0920dc382034de033f049844e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ddeb057409a0f240ae7e791bcfcfd79c7042ffc2b7ee52af0d8b32700831c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cdb980fa6742ed82b0f3fd2deecc58c8d9a10a56ebe4e0dead269554c2db6dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad4423d8dfe5848c14b431eafe754906f35b0e6c5eca18fbd8e1a3fdd124a5cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fedecaeb07c1af996b07ff669fe570248a791e0f108d1bec7aaf72dacef2287d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9395c65b0dc1cf462fbb1b68440f706e3ad6806a6c60c77cc07f9c8e989f628\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:19:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:19:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:19:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.581656 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd7f13e2ca227a21fab78ec3ec21b48048aba91367f04202f5d5b8efc141d047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:20:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.596605 4784 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8cjm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76b58650-2600-48a5-b11e-2ed4503cc6b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:20:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:21:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:21:07Z\\\",\\\"message\\\":\\\"2026-01-23T06:20:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe\\\\n2026-01-23T06:20:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3328a95c-af77-4978-9258-cb6dca4ae5fe to /host/opt/cni/bin/\\\\n2026-01-23T06:20:22Z [verbose] multus-daemon started\\\\n2026-01-23T06:20:22Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T06:21:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:20:19Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:21:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhrvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:20:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8cjm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:21:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.642691 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.643092 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.643349 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.643532 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.643703 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.747443 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.747879 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.748054 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.748193 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.748318 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.852408 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.852485 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.852500 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.852523 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.852542 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.956425 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.956514 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.956537 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.956580 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:37 crc kubenswrapper[4784]: I0123 06:21:37.956600 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:37Z","lastTransitionTime":"2026-01-23T06:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.059038 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.059107 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.059186 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.059256 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.059284 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.162977 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.163041 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.163053 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.163077 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.163093 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.251882 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:06:20.933089717 +0000 UTC Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.266143 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.266224 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.266255 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.266286 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.266308 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.369272 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.369321 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.369331 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.369351 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.369365 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.472855 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.472925 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.472940 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.472965 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.472996 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.575854 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.575908 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.575920 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.575943 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.575958 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.679062 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.679113 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.679129 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.679148 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.679161 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.782711 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.782856 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.782885 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.782921 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.782943 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.887233 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.887313 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.887332 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.887363 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.887385 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.991065 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.991288 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.991315 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.991351 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:38 crc kubenswrapper[4784]: I0123 06:21:38.991373 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:38Z","lastTransitionTime":"2026-01-23T06:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.094871 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.094947 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.094967 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.095004 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.095025 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.200554 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.200636 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.200658 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.200687 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.200711 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.253100 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:42:51.264291212 +0000 UTC Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.253292 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.253342 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:39 crc kubenswrapper[4784]: E0123 06:21:39.253460 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.253499 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.253578 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:39 crc kubenswrapper[4784]: E0123 06:21:39.254026 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:39 crc kubenswrapper[4784]: E0123 06:21:39.254358 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:39 crc kubenswrapper[4784]: E0123 06:21:39.254419 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.304690 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.305074 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.305152 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.305295 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.305370 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.409174 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.409214 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.409229 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.409246 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.409257 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.512406 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.512447 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.512457 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.512474 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.512487 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.616127 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.616175 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.616184 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.616203 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.616214 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.720113 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.720175 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.720192 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.720223 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.720241 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.824134 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.824202 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.824222 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.824250 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.824270 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.928119 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.928185 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.928205 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.928233 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:39 crc kubenswrapper[4784]: I0123 06:21:39.928253 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:39Z","lastTransitionTime":"2026-01-23T06:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.031980 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.032059 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.032085 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.032117 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.032140 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.149763 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.150197 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.150273 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.150355 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.150423 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.253225 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:25:01.605000958 +0000 UTC Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.254055 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.254105 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.254125 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.254150 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.254172 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.357746 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.358164 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.358232 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.358296 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.358370 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.461423 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.462129 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.462202 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.462246 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.462277 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.566371 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.566418 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.566431 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.566451 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.566463 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.670052 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.670100 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.670111 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.670131 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.670150 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.773254 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.773297 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.773307 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.773325 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.773337 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.876144 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.876204 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.876220 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.876246 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.876263 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.979500 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.979560 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.979576 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.979594 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:40 crc kubenswrapper[4784]: I0123 06:21:40.979606 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:40Z","lastTransitionTime":"2026-01-23T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.084500 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.084571 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.084590 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.084618 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.084638 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.188517 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.188610 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.188638 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.188677 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.188700 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.253490 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:39:56.865019948 +0000 UTC Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.253656 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.253684 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.253888 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:41 crc kubenswrapper[4784]: E0123 06:21:41.254058 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.254113 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:41 crc kubenswrapper[4784]: E0123 06:21:41.254411 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:41 crc kubenswrapper[4784]: E0123 06:21:41.254381 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:41 crc kubenswrapper[4784]: E0123 06:21:41.254683 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.291793 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.291868 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.291906 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.291936 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.291959 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.395795 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.395866 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.395885 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.395912 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.395931 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.499648 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.499733 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.499860 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.499899 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.499925 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.604052 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.604151 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.604187 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.604220 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.604244 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.707095 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.707153 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.707167 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.707190 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.707206 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.811386 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.811557 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.811579 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.811658 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.811684 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.915268 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.915330 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.915348 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.915376 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:41 crc kubenswrapper[4784]: I0123 06:21:41.915396 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:41Z","lastTransitionTime":"2026-01-23T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.019165 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.019242 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.019267 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.019299 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.019325 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:42Z","lastTransitionTime":"2026-01-23T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.123102 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.123185 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.123200 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.123228 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.123246 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:42Z","lastTransitionTime":"2026-01-23T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.226447 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.226519 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.226529 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.226551 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.226562 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:42Z","lastTransitionTime":"2026-01-23T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.253661 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:14:08.264404623 +0000 UTC Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.330417 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.330513 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.330535 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.330659 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.330743 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:42Z","lastTransitionTime":"2026-01-23T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.433440 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.433495 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.433507 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.433525 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.433541 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:42Z","lastTransitionTime":"2026-01-23T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.537173 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.537275 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.537289 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.537340 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.537355 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:42Z","lastTransitionTime":"2026-01-23T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.579266 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.579313 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.579324 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.579345 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.579360 4784 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:21:42Z","lastTransitionTime":"2026-01-23T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.649237 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw"] Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.650057 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.652184 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.652190 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.653174 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.653593 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.720003 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.719947353 podStartE2EDuration="1m24.719947353s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:42.697581367 +0000 UTC m=+105.930089401" watchObservedRunningTime="2026-01-23 06:21:42.719947353 +0000 UTC m=+105.952455327" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.741563 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8cjm4" podStartSLOduration=84.741533873 podStartE2EDuration="1m24.741533873s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:42.741196696 +0000 UTC m=+105.973704680" watchObservedRunningTime="2026-01-23 06:21:42.741533873 +0000 UTC m=+105.974041857" Jan 23 06:21:42 crc 
kubenswrapper[4784]: I0123 06:21:42.766061 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4bd8eb97-0590-4082-9089-b1fe05ec3d82-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.766132 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd8eb97-0590-4082-9089-b1fe05ec3d82-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.766232 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4bd8eb97-0590-4082-9089-b1fe05ec3d82-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.766369 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4bd8eb97-0590-4082-9089-b1fe05ec3d82-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.766434 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4bd8eb97-0590-4082-9089-b1fe05ec3d82-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.785196 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.785176192 podStartE2EDuration="1m25.785176192s" podCreationTimestamp="2026-01-23 06:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:42.78510593 +0000 UTC m=+106.017613904" watchObservedRunningTime="2026-01-23 06:21:42.785176192 +0000 UTC m=+106.017684186" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.785416 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.785410338 podStartE2EDuration="59.785410338s" podCreationTimestamp="2026-01-23 06:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:42.761502759 +0000 UTC m=+105.994010753" watchObservedRunningTime="2026-01-23 06:21:42.785410338 +0000 UTC m=+106.017918322" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.846277 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-f9zpg" podStartSLOduration=84.846247953 podStartE2EDuration="1m24.846247953s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:42.819127276 +0000 UTC m=+106.051635240" watchObservedRunningTime="2026-01-23 06:21:42.846247953 +0000 UTC m=+106.078755927" Jan 23 06:21:42 
crc kubenswrapper[4784]: I0123 06:21:42.867706 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bd8eb97-0590-4082-9089-b1fe05ec3d82-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.867807 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4bd8eb97-0590-4082-9089-b1fe05ec3d82-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.867849 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd8eb97-0590-4082-9089-b1fe05ec3d82-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.867870 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4bd8eb97-0590-4082-9089-b1fe05ec3d82-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.867919 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4bd8eb97-0590-4082-9089-b1fe05ec3d82-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.868002 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4bd8eb97-0590-4082-9089-b1fe05ec3d82-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.868068 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4bd8eb97-0590-4082-9089-b1fe05ec3d82-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.869249 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4bd8eb97-0590-4082-9089-b1fe05ec3d82-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.881735 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd8eb97-0590-4082-9089-b1fe05ec3d82-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.886744 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bd8eb97-0590-4082-9089-b1fe05ec3d82-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bqsvw\" (UID: \"4bd8eb97-0590-4082-9089-b1fe05ec3d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.961476 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.961454896 podStartE2EDuration="1m26.961454896s" podCreationTimestamp="2026-01-23 06:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:42.96115945 +0000 UTC m=+106.193667424" watchObservedRunningTime="2026-01-23 06:21:42.961454896 +0000 UTC m=+106.193962870" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.962013 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9q9h5" podStartSLOduration=84.962003398 podStartE2EDuration="1m24.962003398s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:42.941832288 +0000 UTC m=+106.174340262" watchObservedRunningTime="2026-01-23 06:21:42.962003398 +0000 UTC m=+106.194511372" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.973436 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.973413441 podStartE2EDuration="25.973413441s" podCreationTimestamp="2026-01-23 06:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:42.972827928 +0000 UTC m=+106.205335912" 
watchObservedRunningTime="2026-01-23 06:21:42.973413441 +0000 UTC m=+106.205921405" Jan 23 06:21:42 crc kubenswrapper[4784]: I0123 06:21:42.982281 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.032163 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6ts88" podStartSLOduration=85.032135652 podStartE2EDuration="1m25.032135652s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:43.031731403 +0000 UTC m=+106.264239377" watchObservedRunningTime="2026-01-23 06:21:43.032135652 +0000 UTC m=+106.264643626" Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.032696 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podStartSLOduration=85.032688813 podStartE2EDuration="1m25.032688813s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:43.010430899 +0000 UTC m=+106.242938893" watchObservedRunningTime="2026-01-23 06:21:43.032688813 +0000 UTC m=+106.265196797" Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.044412 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9bs27" podStartSLOduration=85.044390283 podStartE2EDuration="1m25.044390283s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:43.044217228 +0000 UTC m=+106.276725202" 
watchObservedRunningTime="2026-01-23 06:21:43.044390283 +0000 UTC m=+106.276898257" Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.252682 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.252742 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.252941 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:43 crc kubenswrapper[4784]: E0123 06:21:43.253018 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.253051 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:43 crc kubenswrapper[4784]: E0123 06:21:43.253230 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:43 crc kubenswrapper[4784]: E0123 06:21:43.253302 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:43 crc kubenswrapper[4784]: E0123 06:21:43.253485 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.254781 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:46:18.951493844 +0000 UTC Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.254842 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.268189 4784 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.966814 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" event={"ID":"4bd8eb97-0590-4082-9089-b1fe05ec3d82","Type":"ContainerStarted","Data":"33ec0a5a9eb9bc0f9a8659385091844871c63e5b5079104e17e3c9ee0844ff16"} Jan 23 06:21:43 
crc kubenswrapper[4784]: I0123 06:21:43.966895 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" event={"ID":"4bd8eb97-0590-4082-9089-b1fe05ec3d82","Type":"ContainerStarted","Data":"45e8261333155ab0cfa0945f775d73a85abbba7bf2b6cd04c1d2807894193823"} Jan 23 06:21:43 crc kubenswrapper[4784]: I0123 06:21:43.984885 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bqsvw" podStartSLOduration=85.98485912 podStartE2EDuration="1m25.98485912s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:43.983696265 +0000 UTC m=+107.216204279" watchObservedRunningTime="2026-01-23 06:21:43.98485912 +0000 UTC m=+107.217367104" Jan 23 06:21:45 crc kubenswrapper[4784]: I0123 06:21:45.253658 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:45 crc kubenswrapper[4784]: I0123 06:21:45.253737 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:45 crc kubenswrapper[4784]: I0123 06:21:45.253738 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:45 crc kubenswrapper[4784]: I0123 06:21:45.253899 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:45 crc kubenswrapper[4784]: E0123 06:21:45.253907 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:45 crc kubenswrapper[4784]: E0123 06:21:45.254075 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:45 crc kubenswrapper[4784]: E0123 06:21:45.254161 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:45 crc kubenswrapper[4784]: E0123 06:21:45.254347 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:46 crc kubenswrapper[4784]: I0123 06:21:46.254322 4784 scope.go:117] "RemoveContainer" containerID="960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f" Jan 23 06:21:46 crc kubenswrapper[4784]: E0123 06:21:46.254681 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9652h_openshift-ovn-kubernetes(73ef0442-94bc-46f2-a551-15b59d1a5cf0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" Jan 23 06:21:47 crc kubenswrapper[4784]: I0123 06:21:47.253040 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:47 crc kubenswrapper[4784]: I0123 06:21:47.253251 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:47 crc kubenswrapper[4784]: I0123 06:21:47.253352 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:47 crc kubenswrapper[4784]: I0123 06:21:47.254492 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:47 crc kubenswrapper[4784]: E0123 06:21:47.254484 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:47 crc kubenswrapper[4784]: E0123 06:21:47.254630 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:47 crc kubenswrapper[4784]: E0123 06:21:47.254914 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:47 crc kubenswrapper[4784]: E0123 06:21:47.255017 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:49 crc kubenswrapper[4784]: I0123 06:21:49.253549 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:49 crc kubenswrapper[4784]: I0123 06:21:49.253682 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:49 crc kubenswrapper[4784]: I0123 06:21:49.253682 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:49 crc kubenswrapper[4784]: I0123 06:21:49.253706 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:49 crc kubenswrapper[4784]: E0123 06:21:49.253845 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:49 crc kubenswrapper[4784]: E0123 06:21:49.253972 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:49 crc kubenswrapper[4784]: E0123 06:21:49.254044 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:49 crc kubenswrapper[4784]: E0123 06:21:49.254148 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:51 crc kubenswrapper[4784]: I0123 06:21:51.253537 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:51 crc kubenswrapper[4784]: I0123 06:21:51.253648 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:51 crc kubenswrapper[4784]: E0123 06:21:51.253728 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:51 crc kubenswrapper[4784]: I0123 06:21:51.253713 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:51 crc kubenswrapper[4784]: E0123 06:21:51.253910 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:51 crc kubenswrapper[4784]: I0123 06:21:51.254242 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:51 crc kubenswrapper[4784]: E0123 06:21:51.254168 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:51 crc kubenswrapper[4784]: E0123 06:21:51.254364 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:53 crc kubenswrapper[4784]: I0123 06:21:53.253033 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:53 crc kubenswrapper[4784]: I0123 06:21:53.253071 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:53 crc kubenswrapper[4784]: I0123 06:21:53.253134 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:53 crc kubenswrapper[4784]: I0123 06:21:53.253880 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:53 crc kubenswrapper[4784]: E0123 06:21:53.254022 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:53 crc kubenswrapper[4784]: E0123 06:21:53.254157 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:53 crc kubenswrapper[4784]: E0123 06:21:53.254233 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:53 crc kubenswrapper[4784]: E0123 06:21:53.254291 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:54 crc kubenswrapper[4784]: I0123 06:21:54.014083 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/1.log" Jan 23 06:21:54 crc kubenswrapper[4784]: I0123 06:21:54.015308 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/0.log" Jan 23 06:21:54 crc kubenswrapper[4784]: I0123 06:21:54.015369 4784 generic.go:334] "Generic (PLEG): container finished" podID="76b58650-2600-48a5-b11e-2ed4503cc6b2" containerID="5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916" exitCode=1 Jan 23 06:21:54 crc kubenswrapper[4784]: I0123 06:21:54.015407 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8cjm4" event={"ID":"76b58650-2600-48a5-b11e-2ed4503cc6b2","Type":"ContainerDied","Data":"5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916"} Jan 23 06:21:54 crc kubenswrapper[4784]: I0123 06:21:54.015467 4784 scope.go:117] "RemoveContainer" containerID="373301083bbbce8e56af8ee0ffe69cd9729e5d2d32d50e6f99a4f042ec1a1953" Jan 23 06:21:54 crc kubenswrapper[4784]: I0123 06:21:54.015980 4784 scope.go:117] "RemoveContainer" containerID="5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916" Jan 23 06:21:54 crc 
kubenswrapper[4784]: E0123 06:21:54.016175 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-8cjm4_openshift-multus(76b58650-2600-48a5-b11e-2ed4503cc6b2)\"" pod="openshift-multus/multus-8cjm4" podUID="76b58650-2600-48a5-b11e-2ed4503cc6b2" Jan 23 06:21:55 crc kubenswrapper[4784]: I0123 06:21:55.019341 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/1.log" Jan 23 06:21:55 crc kubenswrapper[4784]: I0123 06:21:55.253036 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:55 crc kubenswrapper[4784]: I0123 06:21:55.253155 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:55 crc kubenswrapper[4784]: I0123 06:21:55.253263 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:55 crc kubenswrapper[4784]: I0123 06:21:55.253274 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:55 crc kubenswrapper[4784]: E0123 06:21:55.253263 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:55 crc kubenswrapper[4784]: E0123 06:21:55.253394 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:55 crc kubenswrapper[4784]: E0123 06:21:55.253673 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:55 crc kubenswrapper[4784]: E0123 06:21:55.253745 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:57 crc kubenswrapper[4784]: E0123 06:21:57.176344 4784 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 06:21:57 crc kubenswrapper[4784]: I0123 06:21:57.253617 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:57 crc kubenswrapper[4784]: I0123 06:21:57.253650 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:57 crc kubenswrapper[4784]: I0123 06:21:57.253624 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:57 crc kubenswrapper[4784]: E0123 06:21:57.255374 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:21:57 crc kubenswrapper[4784]: I0123 06:21:57.255492 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:57 crc kubenswrapper[4784]: E0123 06:21:57.255680 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:57 crc kubenswrapper[4784]: E0123 06:21:57.255742 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:57 crc kubenswrapper[4784]: E0123 06:21:57.255837 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:57 crc kubenswrapper[4784]: I0123 06:21:57.256954 4784 scope.go:117] "RemoveContainer" containerID="960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f" Jan 23 06:21:57 crc kubenswrapper[4784]: E0123 06:21:57.369351 4784 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 23 06:21:58 crc kubenswrapper[4784]: I0123 06:21:58.033593 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/3.log" Jan 23 06:21:58 crc kubenswrapper[4784]: I0123 06:21:58.035897 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerStarted","Data":"5dbbbec10ed1c6d67d24d383d9860743c982df36bdab505bb77409c5c9a0aa5b"} Jan 23 06:21:58 crc kubenswrapper[4784]: I0123 06:21:58.037581 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:21:58 crc kubenswrapper[4784]: I0123 06:21:58.970812 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podStartSLOduration=100.97077576 podStartE2EDuration="1m40.97077576s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:21:58.079893139 +0000 UTC m=+121.312401123" watchObservedRunningTime="2026-01-23 06:21:58.97077576 +0000 UTC m=+122.203283754" Jan 23 06:21:58 crc kubenswrapper[4784]: I0123 06:21:58.972486 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lcdgv"] Jan 23 06:21:58 crc kubenswrapper[4784]: I0123 06:21:58.972686 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:21:58 crc kubenswrapper[4784]: E0123 06:21:58.972885 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:21:59 crc kubenswrapper[4784]: I0123 06:21:59.253452 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:21:59 crc kubenswrapper[4784]: I0123 06:21:59.253483 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:21:59 crc kubenswrapper[4784]: E0123 06:21:59.253642 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:21:59 crc kubenswrapper[4784]: I0123 06:21:59.253678 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:21:59 crc kubenswrapper[4784]: E0123 06:21:59.253828 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:21:59 crc kubenswrapper[4784]: E0123 06:21:59.253916 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:22:01 crc kubenswrapper[4784]: I0123 06:22:01.253449 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:01 crc kubenswrapper[4784]: I0123 06:22:01.253463 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:01 crc kubenswrapper[4784]: I0123 06:22:01.253623 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:01 crc kubenswrapper[4784]: I0123 06:22:01.253995 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:01 crc kubenswrapper[4784]: E0123 06:22:01.254011 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:22:01 crc kubenswrapper[4784]: E0123 06:22:01.254147 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:22:01 crc kubenswrapper[4784]: E0123 06:22:01.254259 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:22:01 crc kubenswrapper[4784]: E0123 06:22:01.254490 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:22:02 crc kubenswrapper[4784]: E0123 06:22:02.370922 4784 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 06:22:03 crc kubenswrapper[4784]: I0123 06:22:03.253473 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:03 crc kubenswrapper[4784]: I0123 06:22:03.253535 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:03 crc kubenswrapper[4784]: I0123 06:22:03.253516 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:03 crc kubenswrapper[4784]: I0123 06:22:03.253645 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:03 crc kubenswrapper[4784]: E0123 06:22:03.253727 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:22:03 crc kubenswrapper[4784]: E0123 06:22:03.253946 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:22:03 crc kubenswrapper[4784]: E0123 06:22:03.254108 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:22:03 crc kubenswrapper[4784]: E0123 06:22:03.254219 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:22:05 crc kubenswrapper[4784]: I0123 06:22:05.252727 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:05 crc kubenswrapper[4784]: E0123 06:22:05.253319 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:22:05 crc kubenswrapper[4784]: I0123 06:22:05.252779 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:05 crc kubenswrapper[4784]: I0123 06:22:05.252967 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:05 crc kubenswrapper[4784]: E0123 06:22:05.253741 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:22:05 crc kubenswrapper[4784]: I0123 06:22:05.252816 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:05 crc kubenswrapper[4784]: E0123 06:22:05.253959 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:22:05 crc kubenswrapper[4784]: E0123 06:22:05.254005 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:22:07 crc kubenswrapper[4784]: I0123 06:22:07.253646 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:07 crc kubenswrapper[4784]: I0123 06:22:07.253735 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:07 crc kubenswrapper[4784]: I0123 06:22:07.253828 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:07 crc kubenswrapper[4784]: I0123 06:22:07.253880 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:07 crc kubenswrapper[4784]: E0123 06:22:07.256405 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:22:07 crc kubenswrapper[4784]: E0123 06:22:07.256587 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:22:07 crc kubenswrapper[4784]: E0123 06:22:07.256868 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:22:07 crc kubenswrapper[4784]: E0123 06:22:07.256997 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:22:07 crc kubenswrapper[4784]: E0123 06:22:07.371987 4784 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 06:22:08 crc kubenswrapper[4784]: I0123 06:22:08.253947 4784 scope.go:117] "RemoveContainer" containerID="5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916" Jan 23 06:22:09 crc kubenswrapper[4784]: I0123 06:22:09.083409 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/1.log" Jan 23 06:22:09 crc kubenswrapper[4784]: I0123 06:22:09.083496 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8cjm4" event={"ID":"76b58650-2600-48a5-b11e-2ed4503cc6b2","Type":"ContainerStarted","Data":"8817814ff7fb7c0b8c339672e8721ca0f715332899fe5f1a0161e291413add1f"} Jan 23 06:22:09 crc kubenswrapper[4784]: I0123 06:22:09.255846 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:09 crc kubenswrapper[4784]: E0123 06:22:09.255987 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:22:09 crc kubenswrapper[4784]: I0123 06:22:09.256119 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:09 crc kubenswrapper[4784]: I0123 06:22:09.256159 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:09 crc kubenswrapper[4784]: I0123 06:22:09.256083 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:09 crc kubenswrapper[4784]: E0123 06:22:09.256275 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:22:09 crc kubenswrapper[4784]: E0123 06:22:09.256542 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:22:09 crc kubenswrapper[4784]: E0123 06:22:09.256635 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:22:11 crc kubenswrapper[4784]: I0123 06:22:11.253648 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:11 crc kubenswrapper[4784]: I0123 06:22:11.253814 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:11 crc kubenswrapper[4784]: E0123 06:22:11.253871 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:22:11 crc kubenswrapper[4784]: I0123 06:22:11.254015 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:11 crc kubenswrapper[4784]: E0123 06:22:11.254184 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:22:11 crc kubenswrapper[4784]: I0123 06:22:11.254220 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:11 crc kubenswrapper[4784]: E0123 06:22:11.254357 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:22:11 crc kubenswrapper[4784]: E0123 06:22:11.254466 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lcdgv" podUID="cdf947ef-7279-4d43-854c-d836e0043e5b" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.253506 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.253594 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.253527 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.253676 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.257273 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.257584 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.258029 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.258190 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.258474 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.258573 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.386535 4784 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 06:22:13 crc 
kubenswrapper[4784]: I0123 06:22:13.431734 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r4ds"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.432262 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wkg9"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.432583 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hhgpf"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.433255 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.436209 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.436535 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.440408 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.440660 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.440846 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ltcmm"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.445302 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.445342 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.445396 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.452382 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pwcvq"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.452954 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.454284 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.455112 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.455396 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.484912 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.485224 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-skjzx"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.485876 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.486265 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.486968 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.487512 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.487807 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.487832 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488064 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488193 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488204 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488307 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488410 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488456 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488223 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xqdqx"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488467 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488420 4784 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488526 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488841 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488860 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488586 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.488651 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.490005 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.494377 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.494588 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.494607 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.494775 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.494918 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.495393 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.495444 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.495545 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.495681 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.496162 4784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.496840 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.497021 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.497506 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.497679 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.497786 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.497959 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.498075 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.497971 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.498482 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.507989 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.508314 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.508470 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.508669 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.508805 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.509814 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bb5s2"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.509926 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.510386 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.510396 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.510720 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.522360 4784 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.522802 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.524148 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.524311 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.524446 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.524588 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.524777 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.526573 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.528198 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.528225 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.528857 4784 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.529092 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.529187 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.529314 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.529578 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.529683 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.529769 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.529898 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.530087 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.530297 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.530414 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 06:22:13 crc 
kubenswrapper[4784]: I0123 06:22:13.530941 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.529321 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.531687 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.536180 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.536690 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.536773 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.536870 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.542017 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545475 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 
06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545580 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsm4w\" (UniqueName: \"kubernetes.io/projected/709308c5-9977-4e05-98f0-b745c298db67-kube-api-access-bsm4w\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545618 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7mwk\" (UniqueName: \"kubernetes.io/projected/4cbb22dd-2c0b-4be3-80b5-affe170bb787-kube-api-access-j7mwk\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545648 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-service-ca-bundle\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545675 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc182930-d86c-46a4-b3fd-493ef396e20b-machine-approver-tls\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545706 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545741 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-client-ca\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545798 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpjn9\" (UniqueName: \"kubernetes.io/projected/85a2a44a-7e65-45f7-bd20-b895f5f09c73-kube-api-access-hpjn9\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545828 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-serving-cert\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545879 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-encryption-config\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.571823 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.572320 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fsrlb"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.572627 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7559"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.573119 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzmbh"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.573271 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l7559" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.573400 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.573597 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.573789 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.573860 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.573273 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.575536 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.576108 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.576293 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.576342 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.576297 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hhgpf"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.545944 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmq7m\" (UniqueName: \"kubernetes.io/projected/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-kube-api-access-cmq7m\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577195 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-etcd-serving-ca\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc 
kubenswrapper[4784]: I0123 06:22:13.577220 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-encryption-config\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577248 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577270 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577293 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577312 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ba69339a-1102-4a25-ae4e-a70b643e6ff1-config\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577336 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba69339a-1102-4a25-ae4e-a70b643e6ff1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577361 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hd8c\" (UniqueName: \"kubernetes.io/projected/32f1325e-ec9d-4375-855d-970361b2ac03-kube-api-access-7hd8c\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577385 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-serving-cert\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577403 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-etcd-client\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 
06:22:13.577422 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577439 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-config\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577489 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577507 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4cbb22dd-2c0b-4be3-80b5-affe170bb787-audit-dir\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577537 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32f1325e-ec9d-4375-855d-970361b2ac03-audit-dir\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577556 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtftg\" (UniqueName: \"kubernetes.io/projected/fc182930-d86c-46a4-b3fd-493ef396e20b-kube-api-access-rtftg\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577575 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51a0574a-18f3-4fea-b3c9-ed345668f240-audit-dir\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577598 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577615 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577631 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc182930-d86c-46a4-b3fd-493ef396e20b-auth-proxy-config\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577648 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba69339a-1102-4a25-ae4e-a70b643e6ff1-images\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577672 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-audit-policies\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577690 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577707 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: 
\"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577727 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577744 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-config\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577781 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-client-ca\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577798 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb1480f7-5616-46a7-a37f-479f33615b7f-serving-cert\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577818 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577833 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577857 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577874 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-audit\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577894 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4v6d\" (UniqueName: \"kubernetes.io/projected/51a0574a-18f3-4fea-b3c9-ed345668f240-kube-api-access-s4v6d\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: 
\"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577912 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a2a44a-7e65-45f7-bd20-b895f5f09c73-serving-cert\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577960 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc182930-d86c-46a4-b3fd-493ef396e20b-config\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577975 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4cbb22dd-2c0b-4be3-80b5-affe170bb787-node-pullsecrets\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577994 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9hcs\" (UniqueName: \"kubernetes.io/projected/ba69339a-1102-4a25-ae4e-a70b643e6ff1-kube-api-access-x9hcs\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578010 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-available-featuregates\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577136 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577573 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.577690 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578267 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578309 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-config\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578332 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-config\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578349 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-serving-cert\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578372 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9z6q\" (UniqueName: \"kubernetes.io/projected/fb1480f7-5616-46a7-a37f-479f33615b7f-kube-api-access-k9z6q\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578405 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/709308c5-9977-4e05-98f0-b745c298db67-serving-cert\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578460 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-image-import-ca\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578488 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-etcd-client\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.578506 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-audit-policies\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.579795 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.580357 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.581199 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r4ds"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.581315 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.581383 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.586623 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.587583 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ltcmm"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.587650 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wkg9"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.587664 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gvzxz"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.588318 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.592953 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.593271 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.593773 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.593836 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.593965 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xqdqx"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.594101 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.594175 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.594205 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.594418 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.594872 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 06:22:13 
crc kubenswrapper[4784]: I0123 06:22:13.595009 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.595018 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.595164 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.595199 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.595074 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.595533 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.595561 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.596000 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.596575 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.596583 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.596625 4784 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.596960 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2stcb"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.597708 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.597911 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.598927 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.601980 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.605855 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.606573 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.607804 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.615981 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.620129 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.621015 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.621293 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.621661 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.622012 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.638776 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.644234 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.644464 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.650201 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.653323 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r4srv"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.654692 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.655730 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.656107 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.656245 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.659611 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.661207 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.663279 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.666225 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.668540 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.675505 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6zvvc"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.676514 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.677246 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.677880 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679266 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4cbb22dd-2c0b-4be3-80b5-affe170bb787-audit-dir\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679310 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32f1325e-ec9d-4375-855d-970361b2ac03-audit-dir\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679334 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtftg\" (UniqueName: \"kubernetes.io/projected/fc182930-d86c-46a4-b3fd-493ef396e20b-kube-api-access-rtftg\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679358 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51a0574a-18f3-4fea-b3c9-ed345668f240-audit-dir\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679378 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679396 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679415 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc182930-d86c-46a4-b3fd-493ef396e20b-auth-proxy-config\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679452 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea1f6b10-9910-420e-96c7-cfd389d931c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679472 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679490 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba69339a-1102-4a25-ae4e-a70b643e6ff1-images\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679509 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679547 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679567 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-audit-policies\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679588 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-config\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679605 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679630 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-client-ca\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679653 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-default-certificate\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679669 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-stats-auth\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679685 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb1480f7-5616-46a7-a37f-479f33615b7f-serving-cert\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679716 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679742 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679783 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-audit\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679811 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4v6d\" (UniqueName: \"kubernetes.io/projected/51a0574a-18f3-4fea-b3c9-ed345668f240-kube-api-access-s4v6d\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679840 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea1f6b10-9910-420e-96c7-cfd389d931c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679865 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679888 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a2a44a-7e65-45f7-bd20-b895f5f09c73-serving-cert\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679885 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679931 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s95w\" (UniqueName: \"kubernetes.io/projected/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-kube-api-access-5s95w\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " 
pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679959 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc182930-d86c-46a4-b3fd-493ef396e20b-config\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.679982 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4cbb22dd-2c0b-4be3-80b5-affe170bb787-node-pullsecrets\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680001 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9hcs\" (UniqueName: \"kubernetes.io/projected/ba69339a-1102-4a25-ae4e-a70b643e6ff1-kube-api-access-x9hcs\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680023 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-available-featuregates\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680044 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680062 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-serving-cert\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680082 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-config\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680100 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-config\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680119 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9z6q\" (UniqueName: \"kubernetes.io/projected/fb1480f7-5616-46a7-a37f-479f33615b7f-kube-api-access-k9z6q\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: 
I0123 06:22:13.680139 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/709308c5-9977-4e05-98f0-b745c298db67-serving-cert\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680162 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-image-import-ca\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680638 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-etcd-client\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680663 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-audit-policies\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680706 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680732 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680775 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsm4w\" (UniqueName: \"kubernetes.io/projected/709308c5-9977-4e05-98f0-b745c298db67-kube-api-access-bsm4w\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680798 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7mwk\" (UniqueName: \"kubernetes.io/projected/4cbb22dd-2c0b-4be3-80b5-affe170bb787-kube-api-access-j7mwk\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680820 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-service-ca-bundle\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680848 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680874 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-client-ca\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680893 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc182930-d86c-46a4-b3fd-493ef396e20b-machine-approver-tls\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680919 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddfg\" (UniqueName: \"kubernetes.io/projected/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-kube-api-access-mddfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680949 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpjn9\" (UniqueName: \"kubernetes.io/projected/85a2a44a-7e65-45f7-bd20-b895f5f09c73-kube-api-access-hpjn9\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: 
\"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680969 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-serving-cert\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.680996 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-encryption-config\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681015 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmq7m\" (UniqueName: \"kubernetes.io/projected/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-kube-api-access-cmq7m\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681037 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-metrics-certs\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681051 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/4cbb22dd-2c0b-4be3-80b5-affe170bb787-audit-dir\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681068 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681092 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681109 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-etcd-serving-ca\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681129 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-encryption-config\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681150 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681167 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba69339a-1102-4a25-ae4e-a70b643e6ff1-config\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681188 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba69339a-1102-4a25-ae4e-a70b643e6ff1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681211 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hd8c\" (UniqueName: \"kubernetes.io/projected/32f1325e-ec9d-4375-855d-970361b2ac03-kube-api-access-7hd8c\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681228 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-etcd-client\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681248 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-service-ca-bundle\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681267 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-serving-cert\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681284 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681287 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681945 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.682046 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-config\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.682173 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f8rrr"] Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.682529 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681303 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-config\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.682894 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea1f6b10-9910-420e-96c7-cfd389d931c4-config\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.682957 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.683638 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc182930-d86c-46a4-b3fd-493ef396e20b-auth-proxy-config\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.683839 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.683947 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-client-ca\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.684383 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.684521 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.684850 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc182930-d86c-46a4-b3fd-493ef396e20b-config\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.684921 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4cbb22dd-2c0b-4be3-80b5-affe170bb787-node-pullsecrets\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.684995 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.685322 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-available-featuregates\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.685418 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-client-ca\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.686439 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba69339a-1102-4a25-ae4e-a70b643e6ff1-images\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.686443 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.686555 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.686657 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.687590 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-audit\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.687775 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-config\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.688026 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.689964 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.691694 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.690464 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-config\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.690685 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-image-import-ca\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.691170 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-audit-policies\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.691238 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/709308c5-9977-4e05-98f0-b745c298db67-serving-cert\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.691231 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-config\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.690267 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a2a44a-7e65-45f7-bd20-b895f5f09c73-serving-cert\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.691694 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51a0574a-18f3-4fea-b3c9-ed345668f240-audit-policies\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.692008 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n9rnn"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.692314 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.692823 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.691434 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb1480f7-5616-46a7-a37f-479f33615b7f-service-ca-bundle\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.694095 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb1480f7-5616-46a7-a37f-479f33615b7f-serving-cert\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681106 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32f1325e-ec9d-4375-855d-970361b2ac03-audit-dir\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.681192 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51a0574a-18f3-4fea-b3c9-ed345668f240-audit-dir\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.694893 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.695175 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.695458 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.695677 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.696331 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4cbb22dd-2c0b-4be3-80b5-affe170bb787-etcd-serving-ca\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.700557 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.702306 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.704261 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-skjzx"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.706867 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.708077 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.708881 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.709261 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.710845 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7559"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.711776 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6q96w"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.713696 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.713845 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6q96w"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.714041 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.715015 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.716079 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.719234 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pwcvq"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.719296 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.721034 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.722045 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bb5s2"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.723120 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2stcb"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.743919 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.744097 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-encryption-config\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.744115 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-serving-cert\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.744124 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-etcd-client\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.744671 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-serving-cert\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.744706 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.744847 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.744971 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.744990 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r4srv"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.745046 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.745149 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba69339a-1102-4a25-ae4e-a70b643e6ff1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.745279 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc182930-d86c-46a4-b3fd-493ef396e20b-machine-approver-tls\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.745421 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.745421 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.745720 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-encryption-config\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.745838 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba69339a-1102-4a25-ae4e-a70b643e6ff1-config\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.745984 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.746022 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4cbb22dd-2c0b-4be3-80b5-affe170bb787-etcd-client\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.746298 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51a0574a-18f3-4fea-b3c9-ed345668f240-serving-cert\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.747487 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.748303 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.749338 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6zvvc"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.750339 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.751371 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.752396 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fsrlb"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.753958 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzmbh"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.754895 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5st7s"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.756136 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5st7s"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.757538 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.757670 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-p55ct"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.760561 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p55ct"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.770002 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n9rnn"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.773811 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.776388 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.776997 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6q96w"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.778598 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.778832 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.779931 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.781034 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f8rrr"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.782150 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-p55ct"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.783527 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5st7s"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.783707 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-metrics-certs\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.783782 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-service-ca-bundle\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.783863 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea1f6b10-9910-420e-96c7-cfd389d931c4-config\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.783973 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea1f6b10-9910-420e-96c7-cfd389d931c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.784011 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.784041 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-default-certificate\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.784068 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-stats-auth\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.784107 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea1f6b10-9910-420e-96c7-cfd389d931c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.784184 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s95w\" (UniqueName: \"kubernetes.io/projected/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-kube-api-access-5s95w\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.784271 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.784331 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddfg\" (UniqueName: \"kubernetes.io/projected/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-kube-api-access-mddfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.784718 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-md2wk"]
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.785348 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.785538 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-md2wk"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.787367 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.791665 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea1f6b10-9910-420e-96c7-cfd389d931c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.799703 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.805067 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea1f6b10-9910-420e-96c7-cfd389d931c4-config\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.818424 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.839552 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.859055 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.878626 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.886058 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9652h"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.890020 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-default-certificate\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.899554 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.908165 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-stats-auth\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.918868 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.927674 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-metrics-certs\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.938986 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.944900 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-service-ca-bundle\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz"
Jan 23 06:22:13 crc kubenswrapper[4784]: I0123 06:22:13.958871 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.018953 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.038462 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.058861 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.078674 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.100838 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.118092 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.139021 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.158999 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.179333 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.199794 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.231108 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.238453 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.258570 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.277765 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.299688 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.318728 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 23 06:22:14 crc
kubenswrapper[4784]: I0123 06:22:14.339153 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.358621 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.378542 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.403116 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.419346 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.439776 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.458700 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.478422 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.499479 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.518435 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 06:22:14 
crc kubenswrapper[4784]: I0123 06:22:14.538618 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.558800 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.579787 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.598339 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.617641 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.638588 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.658601 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.676495 4784 request.go:700] Waited for 1.01268335s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.681537 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.698384 4784 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.719272 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.739499 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.766597 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.778306 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.799277 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.819572 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.838817 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.858672 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.879225 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.898936 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.919189 4784 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.939469 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.986391 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4v6d\" (UniqueName: \"kubernetes.io/projected/51a0574a-18f3-4fea-b3c9-ed345668f240-kube-api-access-s4v6d\") pod \"apiserver-7bbb656c7d-qgmbq\" (UID: \"51a0574a-18f3-4fea-b3c9-ed345668f240\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:14 crc kubenswrapper[4784]: I0123 06:22:14.999321 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.006464 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtftg\" (UniqueName: \"kubernetes.io/projected/fc182930-d86c-46a4-b3fd-493ef396e20b-kube-api-access-rtftg\") pod \"machine-approver-56656f9798-7k2sb\" (UID: \"fc182930-d86c-46a4-b3fd-493ef396e20b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.018439 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.038027 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.059394 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 06:22:15 
crc kubenswrapper[4784]: I0123 06:22:15.078429 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.118975 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.125083 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9z6q\" (UniqueName: \"kubernetes.io/projected/fb1480f7-5616-46a7-a37f-479f33615b7f-kube-api-access-k9z6q\") pod \"authentication-operator-69f744f599-pwcvq\" (UID: \"fb1480f7-5616-46a7-a37f-479f33615b7f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.138448 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.141860 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.178113 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.181332 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.181920 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9hcs\" (UniqueName: \"kubernetes.io/projected/ba69339a-1102-4a25-ae4e-a70b643e6ff1-kube-api-access-x9hcs\") pod \"machine-api-operator-5694c8668f-ltcmm\" (UID: \"ba69339a-1102-4a25-ae4e-a70b643e6ff1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.214391 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsm4w\" (UniqueName: \"kubernetes.io/projected/709308c5-9977-4e05-98f0-b745c298db67-kube-api-access-bsm4w\") pod \"route-controller-manager-6576b87f9c-h2hn7\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.233876 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpjn9\" (UniqueName: \"kubernetes.io/projected/85a2a44a-7e65-45f7-bd20-b895f5f09c73-kube-api-access-hpjn9\") pod \"controller-manager-879f6c89f-4r4ds\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.245939 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.252378 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7mwk\" (UniqueName: \"kubernetes.io/projected/4cbb22dd-2c0b-4be3-80b5-affe170bb787-kube-api-access-j7mwk\") pod \"apiserver-76f77b778f-hhgpf\" (UID: \"4cbb22dd-2c0b-4be3-80b5-affe170bb787\") " pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.276651 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hd8c\" (UniqueName: \"kubernetes.io/projected/32f1325e-ec9d-4375-855d-970361b2ac03-kube-api-access-7hd8c\") pod \"oauth-openshift-558db77b4-4wkg9\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.293990 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmq7m\" (UniqueName: \"kubernetes.io/projected/f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39-kube-api-access-cmq7m\") pod \"openshift-config-operator-7777fb866f-skjzx\" (UID: \"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.298853 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.307057 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.319031 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.347544 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.358656 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.378793 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.381557 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.392944 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.398472 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.412620 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.418850 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.420039 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.439502 4784 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.458389 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.462468 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.478958 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.499455 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.519329 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.538576 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.558807 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.696858 4784 request.go:700] Waited for 1.910830131s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.959765 4784 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.965952 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.965958 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.966351 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.966726 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.967851 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.981261 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s95w\" (UniqueName: \"kubernetes.io/projected/0d1c5a4a-d067-4ab8-b623-82a192c3bb07-kube-api-access-5s95w\") pod \"router-default-5444994796-gvzxz\" (UID: \"0d1c5a4a-d067-4ab8-b623-82a192c3bb07\") " pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 06:22:15.986109 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddfg\" (UniqueName: \"kubernetes.io/projected/3dbf9ccb-15be-4b0f-bf67-8638a57bb848-kube-api-access-mddfg\") pod \"openshift-controller-manager-operator-756b6f6bc6-jbckc\" (UID: \"3dbf9ccb-15be-4b0f-bf67-8638a57bb848\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" Jan 23 06:22:15 crc kubenswrapper[4784]: I0123 
06:22:15.997594 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea1f6b10-9910-420e-96c7-cfd389d931c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-65wmn\" (UID: \"ea1f6b10-9910-420e-96c7-cfd389d931c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.023884 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.023964 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.024719 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-bound-sa-token\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.024964 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdklm\" (UniqueName: 
\"kubernetes.io/projected/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-kube-api-access-fdklm\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.025523 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.025631 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-serving-cert\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.025685 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.025901 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026060 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026119 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2rnk\" (UniqueName: \"kubernetes.io/projected/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-kube-api-access-l2rnk\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026151 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-certificates\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026200 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7c339b-1a18-4a89-ad41-889f28df7304-serving-cert\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026234 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdzdp\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-kube-api-access-bdzdp\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026259 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-config\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026311 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswbh\" (UniqueName: \"kubernetes.io/projected/b03d7aa3-b8a0-4725-b16d-908e50b963e4-kube-api-access-gswbh\") pod \"downloads-7954f5f757-bb5s2\" (UID: \"b03d7aa3-b8a0-4725-b16d-908e50b963e4\") " pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026339 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdnz\" (UniqueName: \"kubernetes.io/projected/fffb502e-8e9d-4eaa-9132-e166d4ad1386-kube-api-access-xjdnz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026370 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/52fb80ca-3a92-42b7-a9b6-7de2cb478603-metrics-tls\") pod \"dns-operator-744455d44c-l7559\" (UID: 
\"52fb80ca-3a92-42b7-a9b6-7de2cb478603\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7559" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026394 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-client\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026463 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-trusted-ca\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026525 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.026551 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a7c339b-1a18-4a89-ad41-889f28df7304-trusted-ca\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.028835 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-tls\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.028924 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fffb502e-8e9d-4eaa-9132-e166d4ad1386-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.028949 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-ca\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.028974 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh8fh\" (UniqueName: \"kubernetes.io/projected/0a7c339b-1a18-4a89-ad41-889f28df7304-kube-api-access-gh8fh\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.030590 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.030643 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.034774 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdbbs\" (UniqueName: \"kubernetes.io/projected/52fb80ca-3a92-42b7-a9b6-7de2cb478603-kube-api-access-gdbbs\") pod \"dns-operator-744455d44c-l7559\" (UID: \"52fb80ca-3a92-42b7-a9b6-7de2cb478603\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7559" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.034852 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7c339b-1a18-4a89-ad41-889f28df7304-config\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.034881 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:16.534860401 +0000 UTC m=+139.767368375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.035823 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fffb502e-8e9d-4eaa-9132-e166d4ad1386-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.036368 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-service-ca\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.129372 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" event={"ID":"fc182930-d86c-46a4-b3fd-493ef396e20b","Type":"ContainerStarted","Data":"2c36b4b7a6a0230ec8633794ce2312536a061b733c000ba3dc7e7f07bf65dd23"} Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.129829 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" 
event={"ID":"fc182930-d86c-46a4-b3fd-493ef396e20b","Type":"ContainerStarted","Data":"fe9d5f1cda3aaf62a378cafc9d93366daf524c180d8050d4f532e1114093b07d"} Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.137671 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.137940 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-serving-cert\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.137981 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxnvh\" (UniqueName: \"kubernetes.io/projected/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-kube-api-access-pxnvh\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.138009 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:16.637975919 +0000 UTC m=+139.870483943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138090 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db28f19-b83b-46a0-befb-1720ccd656bb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138157 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e03c05ee-79c1-492f-bc57-f4241be21623-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138186 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97fee47e-af30-44f4-b7ce-c7277e65dc35-config\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138216 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-socket-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138241 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-tmpfs\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138281 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-plugins-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138305 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcrw\" (UniqueName: \"kubernetes.io/projected/e03c05ee-79c1-492f-bc57-f4241be21623-kube-api-access-6mcrw\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138324 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f6b7620-7eef-4758-8e27-44453c3925f9-srv-cert\") pod \"catalog-operator-68c6474976-mtnbf\" (UID: \"4f6b7620-7eef-4758-8e27-44453c3925f9\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138383 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/120c77d9-d427-4c8c-87fb-4443fe6ee918-signing-key\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138417 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55t6\" (UniqueName: \"kubernetes.io/projected/34f058be-8b3f-4835-aab4-ab7df5f787b0-kube-api-access-v55t6\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138441 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-metrics-tls\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138471 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk6c7\" (UniqueName: \"kubernetes.io/projected/4f6b7620-7eef-4758-8e27-44453c3925f9-kube-api-access-pk6c7\") pod \"catalog-operator-68c6474976-mtnbf\" (UID: \"4f6b7620-7eef-4758-8e27-44453c3925f9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138518 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-serving-cert\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138546 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-trusted-ca-bundle\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138569 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs86p\" (UniqueName: \"kubernetes.io/projected/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-kube-api-access-rs86p\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138622 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138654 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 
06:22:16.138700 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmbk\" (UniqueName: \"kubernetes.io/projected/ee95cb1e-738d-4e44-bcd9-978114c4e440-kube-api-access-4nmbk\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138726 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhr8h\" (UniqueName: \"kubernetes.io/projected/0bdea249-8d22-4c90-81ed-0fd52338641e-kube-api-access-hhr8h\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138770 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7c339b-1a18-4a89-ad41-889f28df7304-serving-cert\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138799 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-config\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138831 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0bdea249-8d22-4c90-81ed-0fd52338641e-metrics-tls\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: 
\"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138862 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfm8p\" (UniqueName: \"kubernetes.io/projected/75a8ac2c-f286-499e-9faa-03f25bc7f579-kube-api-access-xfm8p\") pod \"control-plane-machine-set-operator-78cbb6b69f-czr5t\" (UID: \"75a8ac2c-f286-499e-9faa-03f25bc7f579\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138896 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-client\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138920 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138942 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a7c339b-1a18-4a89-ad41-889f28df7304-trusted-ca\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.138979 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/97fee47e-af30-44f4-b7ce-c7277e65dc35-serving-cert\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139007 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x66ph\" (UniqueName: \"kubernetes.io/projected/120c77d9-d427-4c8c-87fb-4443fe6ee918-kube-api-access-x66ph\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139030 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-tls\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139060 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd230e8a-2ec3-40e3-b964-66279c61bdfb-config-volume\") pod \"collect-profiles-29485815-bhpt6\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139092 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxwgp\" (UniqueName: \"kubernetes.io/projected/9e1d9fb8-3442-41c5-830e-7264ef907208-kube-api-access-cxwgp\") pod \"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 
06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139278 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-csi-data-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139304 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0db28f19-b83b-46a0-befb-1720ccd656bb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139336 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee95cb1e-738d-4e44-bcd9-978114c4e440-proxy-tls\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139362 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-apiservice-cert\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139414 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fffb502e-8e9d-4eaa-9132-e166d4ad1386-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139435 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-oauth-config\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139458 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-service-ca\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139475 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-webhook-cert\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139511 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-bound-sa-token\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139532 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee95cb1e-738d-4e44-bcd9-978114c4e440-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139548 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtkc8\" (UniqueName: \"kubernetes.io/projected/b6c8a935-b603-40f3-8051-c705e23c20f3-kube-api-access-rtkc8\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139567 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34f058be-8b3f-4835-aab4-ab7df5f787b0-images\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139586 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljpp4\" (UniqueName: \"kubernetes.io/projected/dc93f303-432c-4487-a225-f0af2fa5bd49-kube-api-access-ljpp4\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139604 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/277df242-6850-47b2-af69-2e33cd07657b-srv-cert\") pod 
\"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139627 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3012e555-7659-4858-aa51-cb6ae6fa6a36-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-x5265\" (UID: \"3012e555-7659-4858-aa51-cb6ae6fa6a36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139647 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0bdea249-8d22-4c90-81ed-0fd52338641e-trusted-ca\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139665 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139697 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 
06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139732 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139790 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34f058be-8b3f-4835-aab4-ab7df5f787b0-proxy-tls\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139814 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db28f19-b83b-46a0-befb-1720ccd656bb-config\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139844 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34f058be-8b3f-4835-aab4-ab7df5f787b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139872 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wp8vx\" (UniqueName: \"kubernetes.io/projected/409ee6bf-e36a-4a14-9223-32c726962eab-kube-api-access-wp8vx\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139896 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01651938-c65b-4314-a29c-02aad47fc6be-cert\") pod \"ingress-canary-p55ct\" (UID: \"01651938-c65b-4314-a29c-02aad47fc6be\") " pod="openshift-ingress-canary/ingress-canary-p55ct" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139972 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.139998 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-service-ca\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.140021 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/75a8ac2c-f286-499e-9faa-03f25bc7f579-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-czr5t\" (UID: \"75a8ac2c-f286-499e-9faa-03f25bc7f579\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" Jan 23 06:22:16 crc 
kubenswrapper[4784]: I0123 06:22:16.140069 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-oauth-serving-cert\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.140107 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd2caef4-07a6-420b-80e0-c2f26b044bee-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f8rrr\" (UID: \"bd2caef4-07a6-420b-80e0-c2f26b044bee\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.140136 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9e1d9fb8-3442-41c5-830e-7264ef907208-node-bootstrap-token\") pod \"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.140160 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-console-config\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.140184 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s52pn\" (UniqueName: \"kubernetes.io/projected/01651938-c65b-4314-a29c-02aad47fc6be-kube-api-access-s52pn\") pod \"ingress-canary-p55ct\" 
(UID: \"01651938-c65b-4314-a29c-02aad47fc6be\") " pod="openshift-ingress-canary/ingress-canary-p55ct" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.140228 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f6b7620-7eef-4758-8e27-44453c3925f9-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtnbf\" (UID: \"4f6b7620-7eef-4758-8e27-44453c3925f9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143243 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-service-ca\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143441 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2rnk\" (UniqueName: \"kubernetes.io/projected/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-kube-api-access-l2rnk\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143524 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-certificates\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143701 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bdzdp\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-kube-api-access-bdzdp\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143782 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gswbh\" (UniqueName: \"kubernetes.io/projected/b03d7aa3-b8a0-4725-b16d-908e50b963e4-kube-api-access-gswbh\") pod \"downloads-7954f5f757-bb5s2\" (UID: \"b03d7aa3-b8a0-4725-b16d-908e50b963e4\") " pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143793 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0a7c339b-1a18-4a89-ad41-889f28df7304-trusted-ca\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143807 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdnz\" (UniqueName: \"kubernetes.io/projected/fffb502e-8e9d-4eaa-9132-e166d4ad1386-kube-api-access-xjdnz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143861 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-config\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143909 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/52fb80ca-3a92-42b7-a9b6-7de2cb478603-metrics-tls\") pod \"dns-operator-744455d44c-l7559\" (UID: \"52fb80ca-3a92-42b7-a9b6-7de2cb478603\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7559" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143940 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/277df242-6850-47b2-af69-2e33cd07657b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.143999 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-trusted-ca\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145057 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7c1f7c1-2f12-473f-b782-750fa84c8b03-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dst6k\" (UID: \"c7c1f7c1-2f12-473f-b782-750fa84c8b03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145100 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvp4w\" (UniqueName: \"kubernetes.io/projected/cd230e8a-2ec3-40e3-b964-66279c61bdfb-kube-api-access-dvp4w\") pod \"collect-profiles-29485815-bhpt6\" (UID: 
\"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145136 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-registration-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145284 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/120c77d9-d427-4c8c-87fb-4443fe6ee918-signing-cabundle\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145317 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e03c05ee-79c1-492f-bc57-f4241be21623-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145352 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fffb502e-8e9d-4eaa-9132-e166d4ad1386-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145423 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-ca\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145523 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh8fh\" (UniqueName: \"kubernetes.io/projected/0a7c339b-1a18-4a89-ad41-889f28df7304-kube-api-access-gh8fh\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145557 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp9vq\" (UniqueName: \"kubernetes.io/projected/97fee47e-af30-44f4-b7ce-c7277e65dc35-kube-api-access-gp9vq\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145589 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vbdd\" (UniqueName: \"kubernetes.io/projected/bd2caef4-07a6-420b-80e0-c2f26b044bee-kube-api-access-7vbdd\") pod \"multus-admission-controller-857f4d67dd-f8rrr\" (UID: \"bd2caef4-07a6-420b-80e0-c2f26b044bee\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145622 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2dqx\" (UniqueName: \"kubernetes.io/projected/3012e555-7659-4858-aa51-cb6ae6fa6a36-kube-api-access-h2dqx\") pod \"package-server-manager-789f6589d5-x5265\" (UID: \"3012e555-7659-4858-aa51-cb6ae6fa6a36\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145662 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.145689 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.146140 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fffb502e-8e9d-4eaa-9132-e166d4ad1386-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.146962 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.148420 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-client\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.148567 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-etcd-ca\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.149744 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.150169 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:16.65014673 +0000 UTC m=+139.882654904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151187 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7c339b-1a18-4a89-ad41-889f28df7304-config\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151258 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdbbs\" (UniqueName: \"kubernetes.io/projected/52fb80ca-3a92-42b7-a9b6-7de2cb478603-kube-api-access-gdbbs\") pod \"dns-operator-744455d44c-l7559\" (UID: \"52fb80ca-3a92-42b7-a9b6-7de2cb478603\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7559" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151335 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-mountpoint-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151430 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgsvv\" (UniqueName: \"kubernetes.io/projected/3aafef89-45b3-4517-bc7c-f669580a3c1a-kube-api-access-kgsvv\") pod 
\"migrator-59844c95c7-pfwg8\" (UID: \"3aafef89-45b3-4517-bc7c-f669580a3c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151458 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151539 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151564 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-config-volume\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151591 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82gpf\" (UniqueName: \"kubernetes.io/projected/c7c1f7c1-2f12-473f-b782-750fa84c8b03-kube-api-access-82gpf\") pod \"cluster-samples-operator-665b6dd947-dst6k\" (UID: \"c7c1f7c1-2f12-473f-b782-750fa84c8b03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151620 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fdklm\" (UniqueName: \"kubernetes.io/projected/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-kube-api-access-fdklm\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151641 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0bdea249-8d22-4c90-81ed-0fd52338641e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151666 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqzr\" (UniqueName: \"kubernetes.io/projected/277df242-6850-47b2-af69-2e33cd07657b-kube-api-access-xkqzr\") pod \"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151688 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9e1d9fb8-3442-41c5-830e-7264ef907208-certs\") pod \"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.151805 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd230e8a-2ec3-40e3-b964-66279c61bdfb-secret-volume\") pod \"collect-profiles-29485815-bhpt6\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.152023 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-certificates\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.152649 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7c339b-1a18-4a89-ad41-889f28df7304-config\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.153272 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fffb502e-8e9d-4eaa-9132-e166d4ad1386-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.156951 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.163777 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.167687 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.174911 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-serving-cert\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.174952 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-tls\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.175255 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf4fdcc3-7a45-404d-ac8a-86700c1b401f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fzjwm\" (UID: \"cf4fdcc3-7a45-404d-ac8a-86700c1b401f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.175578 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a7c339b-1a18-4a89-ad41-889f28df7304-serving-cert\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.176147 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.183638 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.230049 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.238078 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.252538 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.252814 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/120c77d9-d427-4c8c-87fb-4443fe6ee918-signing-key\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.252847 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v55t6\" (UniqueName: \"kubernetes.io/projected/34f058be-8b3f-4835-aab4-ab7df5f787b0-kube-api-access-v55t6\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.252874 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-metrics-tls\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.252900 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk6c7\" (UniqueName: \"kubernetes.io/projected/4f6b7620-7eef-4758-8e27-44453c3925f9-kube-api-access-pk6c7\") pod \"catalog-operator-68c6474976-mtnbf\" (UID: 
\"4f6b7620-7eef-4758-8e27-44453c3925f9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.252930 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-serving-cert\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.252954 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-trusted-ca-bundle\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.252990 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs86p\" (UniqueName: \"kubernetes.io/projected/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-kube-api-access-rs86p\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253027 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253050 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nmbk\" (UniqueName: 
\"kubernetes.io/projected/ee95cb1e-738d-4e44-bcd9-978114c4e440-kube-api-access-4nmbk\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253074 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhr8h\" (UniqueName: \"kubernetes.io/projected/0bdea249-8d22-4c90-81ed-0fd52338641e-kube-api-access-hhr8h\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253100 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0bdea249-8d22-4c90-81ed-0fd52338641e-metrics-tls\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253123 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfm8p\" (UniqueName: \"kubernetes.io/projected/75a8ac2c-f286-499e-9faa-03f25bc7f579-kube-api-access-xfm8p\") pod \"control-plane-machine-set-operator-78cbb6b69f-czr5t\" (UID: \"75a8ac2c-f286-499e-9faa-03f25bc7f579\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253148 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97fee47e-af30-44f4-b7ce-c7277e65dc35-serving-cert\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 
06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253172 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x66ph\" (UniqueName: \"kubernetes.io/projected/120c77d9-d427-4c8c-87fb-4443fe6ee918-kube-api-access-x66ph\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253197 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd230e8a-2ec3-40e3-b964-66279c61bdfb-config-volume\") pod \"collect-profiles-29485815-bhpt6\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253222 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxwgp\" (UniqueName: \"kubernetes.io/projected/9e1d9fb8-3442-41c5-830e-7264ef907208-kube-api-access-cxwgp\") pod \"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253246 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-csi-data-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253269 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0db28f19-b83b-46a0-befb-1720ccd656bb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") 
" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253291 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee95cb1e-738d-4e44-bcd9-978114c4e440-proxy-tls\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253312 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-apiservice-cert\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253335 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-oauth-config\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253360 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-webhook-cert\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253399 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljpp4\" (UniqueName: 
\"kubernetes.io/projected/dc93f303-432c-4487-a225-f0af2fa5bd49-kube-api-access-ljpp4\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253445 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee95cb1e-738d-4e44-bcd9-978114c4e440-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253467 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtkc8\" (UniqueName: \"kubernetes.io/projected/b6c8a935-b603-40f3-8051-c705e23c20f3-kube-api-access-rtkc8\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253492 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34f058be-8b3f-4835-aab4-ab7df5f787b0-images\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.253536 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:16.753494854 +0000 UTC m=+139.986002858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253580 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/277df242-6850-47b2-af69-2e33cd07657b-srv-cert\") pod \"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253608 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3012e555-7659-4858-aa51-cb6ae6fa6a36-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-x5265\" (UID: \"3012e555-7659-4858-aa51-cb6ae6fa6a36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253647 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0bdea249-8d22-4c90-81ed-0fd52338641e-trusted-ca\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253681 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253713 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34f058be-8b3f-4835-aab4-ab7df5f787b0-proxy-tls\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253738 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db28f19-b83b-46a0-befb-1720ccd656bb-config\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253798 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34f058be-8b3f-4835-aab4-ab7df5f787b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253822 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp8vx\" (UniqueName: \"kubernetes.io/projected/409ee6bf-e36a-4a14-9223-32c726962eab-kube-api-access-wp8vx\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc 
kubenswrapper[4784]: I0123 06:22:16.253844 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01651938-c65b-4314-a29c-02aad47fc6be-cert\") pod \"ingress-canary-p55ct\" (UID: \"01651938-c65b-4314-a29c-02aad47fc6be\") " pod="openshift-ingress-canary/ingress-canary-p55ct" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253875 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-service-ca\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253904 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/75a8ac2c-f286-499e-9faa-03f25bc7f579-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-czr5t\" (UID: \"75a8ac2c-f286-499e-9faa-03f25bc7f579\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253932 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-oauth-serving-cert\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253964 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd2caef4-07a6-420b-80e0-c2f26b044bee-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f8rrr\" (UID: \"bd2caef4-07a6-420b-80e0-c2f26b044bee\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.253987 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9e1d9fb8-3442-41c5-830e-7264ef907208-node-bootstrap-token\") pod \"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254023 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-console-config\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254087 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s52pn\" (UniqueName: \"kubernetes.io/projected/01651938-c65b-4314-a29c-02aad47fc6be-kube-api-access-s52pn\") pod \"ingress-canary-p55ct\" (UID: \"01651938-c65b-4314-a29c-02aad47fc6be\") " pod="openshift-ingress-canary/ingress-canary-p55ct" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254110 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f6b7620-7eef-4758-8e27-44453c3925f9-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtnbf\" (UID: \"4f6b7620-7eef-4758-8e27-44453c3925f9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254159 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/277df242-6850-47b2-af69-2e33cd07657b-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254181 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7c1f7c1-2f12-473f-b782-750fa84c8b03-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dst6k\" (UID: \"c7c1f7c1-2f12-473f-b782-750fa84c8b03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254211 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34f058be-8b3f-4835-aab4-ab7df5f787b0-images\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254217 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvp4w\" (UniqueName: \"kubernetes.io/projected/cd230e8a-2ec3-40e3-b964-66279c61bdfb-kube-api-access-dvp4w\") pod \"collect-profiles-29485815-bhpt6\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254247 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-registration-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254271 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/120c77d9-d427-4c8c-87fb-4443fe6ee918-signing-cabundle\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254293 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e03c05ee-79c1-492f-bc57-f4241be21623-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254342 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp9vq\" (UniqueName: \"kubernetes.io/projected/97fee47e-af30-44f4-b7ce-c7277e65dc35-kube-api-access-gp9vq\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254367 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vbdd\" (UniqueName: \"kubernetes.io/projected/bd2caef4-07a6-420b-80e0-c2f26b044bee-kube-api-access-7vbdd\") pod \"multus-admission-controller-857f4d67dd-f8rrr\" (UID: \"bd2caef4-07a6-420b-80e0-c2f26b044bee\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254390 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2dqx\" (UniqueName: \"kubernetes.io/projected/3012e555-7659-4858-aa51-cb6ae6fa6a36-kube-api-access-h2dqx\") pod \"package-server-manager-789f6589d5-x5265\" (UID: \"3012e555-7659-4858-aa51-cb6ae6fa6a36\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254419 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254465 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-mountpoint-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254491 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgsvv\" (UniqueName: \"kubernetes.io/projected/3aafef89-45b3-4517-bc7c-f669580a3c1a-kube-api-access-kgsvv\") pod \"migrator-59844c95c7-pfwg8\" (UID: \"3aafef89-45b3-4517-bc7c-f669580a3c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254521 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-config-volume\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254549 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82gpf\" (UniqueName: \"kubernetes.io/projected/c7c1f7c1-2f12-473f-b782-750fa84c8b03-kube-api-access-82gpf\") pod 
\"cluster-samples-operator-665b6dd947-dst6k\" (UID: \"c7c1f7c1-2f12-473f-b782-750fa84c8b03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254583 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0bdea249-8d22-4c90-81ed-0fd52338641e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254607 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqzr\" (UniqueName: \"kubernetes.io/projected/277df242-6850-47b2-af69-2e33cd07657b-kube-api-access-xkqzr\") pod \"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254635 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9e1d9fb8-3442-41c5-830e-7264ef907208-certs\") pod \"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254660 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd230e8a-2ec3-40e3-b964-66279c61bdfb-secret-volume\") pod \"collect-profiles-29485815-bhpt6\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254694 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pxnvh\" (UniqueName: \"kubernetes.io/projected/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-kube-api-access-pxnvh\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254710 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db28f19-b83b-46a0-befb-1720ccd656bb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254726 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e03c05ee-79c1-492f-bc57-f4241be21623-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254765 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97fee47e-af30-44f4-b7ce-c7277e65dc35-config\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254780 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-csi-data-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " 
pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254790 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-socket-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254827 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-tmpfs\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254856 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-plugins-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254884 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcrw\" (UniqueName: \"kubernetes.io/projected/e03c05ee-79c1-492f-bc57-f4241be21623-kube-api-access-6mcrw\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.254908 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f6b7620-7eef-4758-8e27-44453c3925f9-srv-cert\") pod \"catalog-operator-68c6474976-mtnbf\" 
(UID: \"4f6b7620-7eef-4758-8e27-44453c3925f9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.255101 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-socket-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.256138 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-mountpoint-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.256171 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-plugins-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.256184 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-tmpfs\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.256866 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db28f19-b83b-46a0-befb-1720ccd656bb-config\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.257291 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd230e8a-2ec3-40e3-b964-66279c61bdfb-config-volume\") pod \"collect-profiles-29485815-bhpt6\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.258354 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409ee6bf-e36a-4a14-9223-32c726962eab-registration-dir\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.259175 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/120c77d9-d427-4c8c-87fb-4443fe6ee918-signing-cabundle\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.259659 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e03c05ee-79c1-492f-bc57-f4241be21623-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.260091 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:16.760076533 +0000 UTC m=+139.992584507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.261224 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/277df242-6850-47b2-af69-2e33cd07657b-srv-cert\") pod \"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.261665 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ee95cb1e-738d-4e44-bcd9-978114c4e440-proxy-tls\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.262122 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-apiservice-cert\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.262566 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/4f6b7620-7eef-4758-8e27-44453c3925f9-srv-cert\") pod \"catalog-operator-68c6474976-mtnbf\" (UID: \"4f6b7620-7eef-4758-8e27-44453c3925f9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.263895 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-webhook-cert\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.264550 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee95cb1e-738d-4e44-bcd9-978114c4e440-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.265040 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0bdea249-8d22-4c90-81ed-0fd52338641e-trusted-ca\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.266085 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.368207 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-oauth-config\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.368520 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-serving-cert\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.368959 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-metrics-tls\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.369340 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34f058be-8b3f-4835-aab4-ab7df5f787b0-proxy-tls\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.369819 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd230e8a-2ec3-40e3-b964-66279c61bdfb-secret-volume\") pod \"collect-profiles-29485815-bhpt6\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.370149 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db28f19-b83b-46a0-befb-1720ccd656bb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.371140 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7c1f7c1-2f12-473f-b782-750fa84c8b03-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dst6k\" (UID: \"c7c1f7c1-2f12-473f-b782-750fa84c8b03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.371640 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97fee47e-af30-44f4-b7ce-c7277e65dc35-serving-cert\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.372157 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0bdea249-8d22-4c90-81ed-0fd52338641e-metrics-tls\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.372712 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.372864 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3012e555-7659-4858-aa51-cb6ae6fa6a36-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-x5265\" (UID: \"3012e555-7659-4858-aa51-cb6ae6fa6a36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.373195 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f6b7620-7eef-4758-8e27-44453c3925f9-profile-collector-cert\") pod \"catalog-operator-68c6474976-mtnbf\" (UID: \"4f6b7620-7eef-4758-8e27-44453c3925f9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.374200 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-trusted-ca-bundle\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.374466 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-trusted-ca\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.377091 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9e1d9fb8-3442-41c5-830e-7264ef907208-certs\") pod 
\"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.380405 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.380661 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/277df242-6850-47b2-af69-2e33cd07657b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.381619 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdnz\" (UniqueName: \"kubernetes.io/projected/fffb502e-8e9d-4eaa-9132-e166d4ad1386-kube-api-access-xjdnz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4fwv\" (UID: \"fffb502e-8e9d-4eaa-9132-e166d4ad1386\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.382128 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-config-volume\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.389260 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-service-ca\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.390634 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:16.890375111 +0000 UTC m=+140.122883095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.392086 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01651938-c65b-4314-a29c-02aad47fc6be-cert\") pod \"ingress-canary-p55ct\" (UID: \"01651938-c65b-4314-a29c-02aad47fc6be\") " pod="openshift-ingress-canary/ingress-canary-p55ct" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.393408 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.394216 4784 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:16.894200195 +0000 UTC m=+140.126708179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.394557 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/120c77d9-d427-4c8c-87fb-4443fe6ee918-signing-key\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.395628 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9e1d9fb8-3442-41c5-830e-7264ef907208-node-bootstrap-token\") pod \"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.396179 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34f058be-8b3f-4835-aab4-ab7df5f787b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.396380 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/52fb80ca-3a92-42b7-a9b6-7de2cb478603-metrics-tls\") pod \"dns-operator-744455d44c-l7559\" (UID: \"52fb80ca-3a92-42b7-a9b6-7de2cb478603\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7559" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.396911 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd2caef4-07a6-420b-80e0-c2f26b044bee-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f8rrr\" (UID: \"bd2caef4-07a6-420b-80e0-c2f26b044bee\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.397326 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2rnk\" (UniqueName: \"kubernetes.io/projected/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-kube-api-access-l2rnk\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.398622 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-console-config\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.399075 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-oauth-serving-cert\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 
06:22:16.400712 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97fee47e-af30-44f4-b7ce-c7277e65dc35-config\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.401317 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdklm\" (UniqueName: \"kubernetes.io/projected/e2ebc8cd-38f9-4337-a299-7faafe40b3c4-kube-api-access-fdklm\") pod \"etcd-operator-b45778765-fsrlb\" (UID: \"e2ebc8cd-38f9-4337-a299-7faafe40b3c4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.415073 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdzdp\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-kube-api-access-bdzdp\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.415625 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-bound-sa-token\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.415994 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcb815b1-b9f3-4af3-85a7-cf4162be8f7b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kv2z9\" (UID: \"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.417570 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gswbh\" (UniqueName: \"kubernetes.io/projected/b03d7aa3-b8a0-4725-b16d-908e50b963e4-kube-api-access-gswbh\") pod \"downloads-7954f5f757-bb5s2\" (UID: \"b03d7aa3-b8a0-4725-b16d-908e50b963e4\") " pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.435024 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e03c05ee-79c1-492f-bc57-f4241be21623-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.436097 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/75a8ac2c-f286-499e-9faa-03f25bc7f579-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-czr5t\" (UID: \"75a8ac2c-f286-499e-9faa-03f25bc7f579\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.436135 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0db28f19-b83b-46a0-befb-1720ccd656bb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vpgh7\" (UID: \"0db28f19-b83b-46a0-befb-1720ccd656bb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.436985 4784 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x66ph\" (UniqueName: \"kubernetes.io/projected/120c77d9-d427-4c8c-87fb-4443fe6ee918-kube-api-access-x66ph\") pod \"service-ca-9c57cc56f-6zvvc\" (UID: \"120c77d9-d427-4c8c-87fb-4443fe6ee918\") " pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.437576 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdbbs\" (UniqueName: \"kubernetes.io/projected/52fb80ca-3a92-42b7-a9b6-7de2cb478603-kube-api-access-gdbbs\") pod \"dns-operator-744455d44c-l7559\" (UID: \"52fb80ca-3a92-42b7-a9b6-7de2cb478603\") " pod="openshift-dns-operator/dns-operator-744455d44c-l7559" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.439556 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs86p\" (UniqueName: \"kubernetes.io/projected/d54c2ab7-ba8c-4e44-b4b5-cdb617753316-kube-api-access-rs86p\") pod \"dns-default-5st7s\" (UID: \"d54c2ab7-ba8c-4e44-b4b5-cdb617753316\") " pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.443010 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhr8h\" (UniqueName: \"kubernetes.io/projected/0bdea249-8d22-4c90-81ed-0fd52338641e-kube-api-access-hhr8h\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.452473 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh8fh\" (UniqueName: \"kubernetes.io/projected/0a7c339b-1a18-4a89-ad41-889f28df7304-kube-api-access-gh8fh\") pod \"console-operator-58897d9998-xqdqx\" (UID: \"0a7c339b-1a18-4a89-ad41-889f28df7304\") " pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.460855 4784 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.465143 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.476807 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l7559" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.494990 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.495474 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:16.995453821 +0000 UTC m=+140.227961795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.499077 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.507444 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.512776 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxwgp\" (UniqueName: \"kubernetes.io/projected/9e1d9fb8-3442-41c5-830e-7264ef907208-kube-api-access-cxwgp\") pod \"machine-config-server-md2wk\" (UID: \"9e1d9fb8-3442-41c5-830e-7264ef907208\") " pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.515242 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.526902 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcrw\" (UniqueName: \"kubernetes.io/projected/e03c05ee-79c1-492f-bc57-f4241be21623-kube-api-access-6mcrw\") pod \"kube-storage-version-migrator-operator-b67b599dd-h8trm\" (UID: \"e03c05ee-79c1-492f-bc57-f4241be21623\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.543149 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk6c7\" (UniqueName: \"kubernetes.io/projected/4f6b7620-7eef-4758-8e27-44453c3925f9-kube-api-access-pk6c7\") pod \"catalog-operator-68c6474976-mtnbf\" (UID: \"4f6b7620-7eef-4758-8e27-44453c3925f9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.555147 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v55t6\" (UniqueName: \"kubernetes.io/projected/34f058be-8b3f-4835-aab4-ab7df5f787b0-kube-api-access-v55t6\") pod \"machine-config-operator-74547568cd-w9xbs\" (UID: \"34f058be-8b3f-4835-aab4-ab7df5f787b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.560548 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.565980 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvp4w\" (UniqueName: \"kubernetes.io/projected/cd230e8a-2ec3-40e3-b964-66279c61bdfb-kube-api-access-dvp4w\") pod \"collect-profiles-29485815-bhpt6\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.581648 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nmbk\" (UniqueName: \"kubernetes.io/projected/ee95cb1e-738d-4e44-bcd9-978114c4e440-kube-api-access-4nmbk\") pod \"machine-config-controller-84d6567774-wn4qk\" (UID: \"ee95cb1e-738d-4e44-bcd9-978114c4e440\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.589489 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.597017 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.597612 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.097599022 +0000 UTC m=+140.330106996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.610090 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.613991 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp9vq\" (UniqueName: \"kubernetes.io/projected/97fee47e-af30-44f4-b7ce-c7277e65dc35-kube-api-access-gp9vq\") pod \"service-ca-operator-777779d784-r4srv\" (UID: \"97fee47e-af30-44f4-b7ce-c7277e65dc35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.617345 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.620874 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vbdd\" (UniqueName: \"kubernetes.io/projected/bd2caef4-07a6-420b-80e0-c2f26b044bee-kube-api-access-7vbdd\") pod \"multus-admission-controller-857f4d67dd-f8rrr\" (UID: \"bd2caef4-07a6-420b-80e0-c2f26b044bee\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.621952 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2dqx\" (UniqueName: \"kubernetes.io/projected/3012e555-7659-4858-aa51-cb6ae6fa6a36-kube-api-access-h2dqx\") pod \"package-server-manager-789f6589d5-x5265\" (UID: \"3012e555-7659-4858-aa51-cb6ae6fa6a36\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.633140 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.646608 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.653135 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.657211 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljpp4\" (UniqueName: \"kubernetes.io/projected/dc93f303-432c-4487-a225-f0af2fa5bd49-kube-api-access-ljpp4\") pod \"marketplace-operator-79b997595-n9rnn\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.661448 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtkc8\" (UniqueName: \"kubernetes.io/projected/b6c8a935-b603-40f3-8051-c705e23c20f3-kube-api-access-rtkc8\") pod \"console-f9d7485db-2stcb\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.661940 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.670097 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.677347 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.683955 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hhgpf"] Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.688689 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxnvh\" (UniqueName: \"kubernetes.io/projected/86f3bcf7-f2f4-4ed0-aae2-be5f61657fac-kube-api-access-pxnvh\") pod \"packageserver-d55dfcdfc-sk4wc\" (UID: \"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.697935 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.197904502 +0000 UTC m=+140.430412476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.697985 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.698536 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.699208 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.199198038 +0000 UTC m=+140.431706012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.704023 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgsvv\" (UniqueName: \"kubernetes.io/projected/3aafef89-45b3-4517-bc7c-f669580a3c1a-kube-api-access-kgsvv\") pod \"migrator-59844c95c7-pfwg8\" (UID: \"3aafef89-45b3-4517-bc7c-f669580a3c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.716799 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82gpf\" (UniqueName: \"kubernetes.io/projected/c7c1f7c1-2f12-473f-b782-750fa84c8b03-kube-api-access-82gpf\") pod \"cluster-samples-operator-665b6dd947-dst6k\" (UID: \"c7c1f7c1-2f12-473f-b782-750fa84c8b03\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.726036 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.740170 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-md2wk" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.751252 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.770548 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqzr\" (UniqueName: \"kubernetes.io/projected/277df242-6850-47b2-af69-2e33cd07657b-kube-api-access-xkqzr\") pod \"olm-operator-6b444d44fb-sgngm\" (UID: \"277df242-6850-47b2-af69-2e33cd07657b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.773482 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0bdea249-8d22-4c90-81ed-0fd52338641e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8xhsh\" (UID: \"0bdea249-8d22-4c90-81ed-0fd52338641e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.798196 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfm8p\" (UniqueName: \"kubernetes.io/projected/75a8ac2c-f286-499e-9faa-03f25bc7f579-kube-api-access-xfm8p\") pod \"control-plane-machine-set-operator-78cbb6b69f-czr5t\" (UID: \"75a8ac2c-f286-499e-9faa-03f25bc7f579\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.800920 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.801396 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.301370979 +0000 UTC m=+140.533878963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.822360 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp8vx\" (UniqueName: \"kubernetes.io/projected/409ee6bf-e36a-4a14-9223-32c726962eab-kube-api-access-wp8vx\") pod \"csi-hostpathplugin-6q96w\" (UID: \"409ee6bf-e36a-4a14-9223-32c726962eab\") " pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.845295 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.853584 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.875596 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s52pn\" (UniqueName: \"kubernetes.io/projected/01651938-c65b-4314-a29c-02aad47fc6be-kube-api-access-s52pn\") pod \"ingress-canary-p55ct\" (UID: \"01651938-c65b-4314-a29c-02aad47fc6be\") " pod="openshift-ingress-canary/ingress-canary-p55ct" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.875937 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.885402 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.902242 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.903505 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:16 crc kubenswrapper[4784]: E0123 06:22:16.904246 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.40423169 +0000 UTC m=+140.636739664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.925405 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.939640 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" Jan 23 06:22:16 crc kubenswrapper[4784]: I0123 06:22:16.951797 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.004939 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.005341 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.505324152 +0000 UTC m=+140.737832126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.042592 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6q96w" Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.042806 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-p55ct" Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.106701 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.107320 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.607295099 +0000 UTC m=+140.839803073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.207202 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" event={"ID":"fc182930-d86c-46a4-b3fd-493ef396e20b","Type":"ContainerStarted","Data":"cda6fc06260fa1124bd471a912341ee5b3c6144d874fe8bae3092f6d2a79bc73"} Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.208685 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-md2wk" event={"ID":"9e1d9fb8-3442-41c5-830e-7264ef907208","Type":"ContainerStarted","Data":"fdbe4f0803ce6e90042eadf296f94d06c35b98b4b978766d4ffc9b26bbdc187e"} Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.219864 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gvzxz" event={"ID":"0d1c5a4a-d067-4ab8-b623-82a192c3bb07","Type":"ContainerStarted","Data":"923b5cb6fc8a57c49833c1b71e1c4bcc8eb5e27261cc02e8bef55a17845e799d"} Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.219921 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gvzxz" event={"ID":"0d1c5a4a-d067-4ab8-b623-82a192c3bb07","Type":"ContainerStarted","Data":"a93ac36efa3958771e3e4fd76d552eaa969366e24a8cfb9e7c1e481ef188a09a"} Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.223431 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.223773 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.723742248 +0000 UTC m=+140.956250222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.240619 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.242802 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.242901 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.310267 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" event={"ID":"4cbb22dd-2c0b-4be3-80b5-affe170bb787","Type":"ContainerStarted","Data":"98940a8fe7a97d3641da1ca9442024abe878295e26b5e9c8c195d14751bcf6c5"} Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.325817 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.327371 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.827353059 +0000 UTC m=+141.059861033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.429153 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.429672 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:17.929651714 +0000 UTC m=+141.162159678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.467483 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r4ds"] Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.531553 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.532099 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.032076803 +0000 UTC m=+141.264584777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.633078 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.633492 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.133467884 +0000 UTC m=+141.365975858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.735147 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.735619 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.235594064 +0000 UTC m=+141.468102218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.838060 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.838544 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.338520436 +0000 UTC m=+141.571028410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:17 crc kubenswrapper[4784]: I0123 06:22:17.939784 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:17 crc kubenswrapper[4784]: E0123 06:22:17.940347 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.440317327 +0000 UTC m=+141.672825471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.041070 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.041344 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.541323707 +0000 UTC m=+141.773831681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.143206 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.143722 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.643700864 +0000 UTC m=+141.876208838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.193212 4784 csr.go:261] certificate signing request csr-chcrr is approved, waiting to be issued Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.198334 4784 csr.go:257] certificate signing request csr-chcrr is issued Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.206777 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7k2sb" podStartSLOduration=120.20673598 podStartE2EDuration="2m0.20673598s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:18.205459746 +0000 UTC m=+141.437967720" watchObservedRunningTime="2026-01-23 06:22:18.20673598 +0000 UTC m=+141.439243954" Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.244449 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.244727 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.744682383 +0000 UTC m=+141.977190357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.245043 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.245376 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.745362032 +0000 UTC m=+141.977870006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.251943 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gvzxz" podStartSLOduration=120.25190559 podStartE2EDuration="2m0.25190559s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:18.251759866 +0000 UTC m=+141.484267840" watchObservedRunningTime="2026-01-23 06:22:18.25190559 +0000 UTC m=+141.484413564" Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.307128 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" event={"ID":"85a2a44a-7e65-45f7-bd20-b895f5f09c73","Type":"ContainerStarted","Data":"6967e30ae4b73d012b5c03791ac4d2b38eea8aafc673861ccf0ac4bc073a76a3"} Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.323621 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-md2wk" event={"ID":"9e1d9fb8-3442-41c5-830e-7264ef907208","Type":"ContainerStarted","Data":"9322b42e064139c0012158bc6a1562e44ca1c9a07499fd79825f0f3966a98bca"} Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.333383 4784 generic.go:334] "Generic (PLEG): container finished" podID="4cbb22dd-2c0b-4be3-80b5-affe170bb787" containerID="2e8d63491525a9c1140a5c173910ed0865ced8e0deca971a9b2a076bddb31ffe" exitCode=0 Jan 23 06:22:18 crc 
kubenswrapper[4784]: I0123 06:22:18.333553 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" event={"ID":"4cbb22dd-2c0b-4be3-80b5-affe170bb787","Type":"ContainerDied","Data":"2e8d63491525a9c1140a5c173910ed0865ced8e0deca971a9b2a076bddb31ffe"} Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.346050 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.346368 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.846338731 +0000 UTC m=+142.078846705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.346910 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.347525 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.847515613 +0000 UTC m=+142.080023587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.396851 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-md2wk" podStartSLOduration=5.396819786 podStartE2EDuration="5.396819786s" podCreationTimestamp="2026-01-23 06:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:18.362378988 +0000 UTC m=+141.594886972" watchObservedRunningTime="2026-01-23 06:22:18.396819786 +0000 UTC m=+141.629327780" Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.448646 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.450959 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:18.950930608 +0000 UTC m=+142.183438582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.551702 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.552504 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.052483033 +0000 UTC m=+142.284991007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.585151 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:18 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:18 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:18 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.585228 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.652867 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.653221 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:19.153206876 +0000 UTC m=+142.385714850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.745833 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"] Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.756285 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.756782 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.256729374 +0000 UTC m=+142.489237348 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.769269 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-skjzx"] Jan 23 06:22:18 crc kubenswrapper[4784]: W0123 06:22:18.779238 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod709308c5_9977_4e05_98f0_b745c298db67.slice/crio-1efaa9123218c444f1f183eafbced58c2d4ccfc2ba082c7d5c5448a3668e9683 WatchSource:0}: Error finding container 1efaa9123218c444f1f183eafbced58c2d4ccfc2ba082c7d5c5448a3668e9683: Status 404 returned error can't find the container with id 1efaa9123218c444f1f183eafbced58c2d4ccfc2ba082c7d5c5448a3668e9683 Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.832728 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn"] Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.859568 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.359542333 +0000 UTC m=+142.592050297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.859463 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.859976 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.860413 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.360403746 +0000 UTC m=+142.592911720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.884826 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq"] Jan 23 06:22:18 crc kubenswrapper[4784]: W0123 06:22:18.921257 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51a0574a_18f3_4fea_b3c9_ed345668f240.slice/crio-220f1e93d4e5326a7e857eb968c136316608789d40b61abfb21899930d379b20 WatchSource:0}: Error finding container 220f1e93d4e5326a7e857eb968c136316608789d40b61abfb21899930d379b20: Status 404 returned error can't find the container with id 220f1e93d4e5326a7e857eb968c136316608789d40b61abfb21899930d379b20 Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.923699 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ltcmm"] Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.937638 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wkg9"] Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.958648 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pwcvq"] Jan 23 06:22:18 crc kubenswrapper[4784]: I0123 06:22:18.960535 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:18 crc kubenswrapper[4784]: E0123 06:22:18.961231 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.461121579 +0000 UTC m=+142.693629553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:18 crc kubenswrapper[4784]: W0123 06:22:18.988228 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb1480f7_5616_46a7_a37f_479f33615b7f.slice/crio-02704426ae8fbbdd91cfe2688aecd5aaf35e83cc3eb43737cf17ea5ce63b32af WatchSource:0}: Error finding container 02704426ae8fbbdd91cfe2688aecd5aaf35e83cc3eb43737cf17ea5ce63b32af: Status 404 returned error can't find the container with id 02704426ae8fbbdd91cfe2688aecd5aaf35e83cc3eb43737cf17ea5ce63b32af Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.069095 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.069537 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.569522269 +0000 UTC m=+142.802030243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.187718 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.188523 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.688496079 +0000 UTC m=+142.921004053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.200285 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 06:17:18 +0000 UTC, rotation deadline is 2026-11-13 06:58:24.410583307 +0000 UTC Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.200353 4784 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7056h36m5.210234675s for next certificate rotation Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.242636 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:19 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:19 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:19 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.242962 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.294103 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.294559 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.794542905 +0000 UTC m=+143.027050879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.359973 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" event={"ID":"32f1325e-ec9d-4375-855d-970361b2ac03","Type":"ContainerStarted","Data":"77031adbce31a831e7525bc0904a08c82d478b33a6803f69b375081501245680"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.363809 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n9rnn"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.372586 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" event={"ID":"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39","Type":"ContainerStarted","Data":"c871567929cc6e43b2f235cc54ecd74836f68d9eab0e1bc70fe89f9c9903f506"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.372637 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" event={"ID":"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39","Type":"ContainerStarted","Data":"774f14b8543e35735e72755366a7c18bb1233e585f5cccfddf24a168613edc89"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.387048 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.389548 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.391679 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" event={"ID":"85a2a44a-7e65-45f7-bd20-b895f5f09c73","Type":"ContainerStarted","Data":"0d191427d4f026ed44e868b055175c7e1dca095073c31a67ca69eb3f2e1398db"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.392951 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.393669 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.397620 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.398280 4784 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:19.898252929 +0000 UTC m=+143.130760903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.412766 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bb5s2"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.413202 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.430769 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" event={"ID":"51a0574a-18f3-4fea-b3c9-ed345668f240","Type":"ContainerStarted","Data":"220f1e93d4e5326a7e857eb968c136316608789d40b61abfb21899930d379b20"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.432518 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xqdqx"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.444765 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f8rrr"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.535309 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.554418 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.557554 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.057538636 +0000 UTC m=+143.290046610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.567494 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" event={"ID":"ba69339a-1102-4a25-ae4e-a70b643e6ff1","Type":"ContainerStarted","Data":"b7a1d07c8dd088bbeda03b03febeb3b021626dbf8a38d958e2ceaf51054c99e4"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.567577 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" 
event={"ID":"ba69339a-1102-4a25-ae4e-a70b643e6ff1","Type":"ContainerStarted","Data":"289c59ee0ba7b0bfa076f4a1b4cd97d6080e68b4dd65803551cfae257a8d66bf"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.577848 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.578287 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" podStartSLOduration=121.57825454 podStartE2EDuration="2m1.57825454s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:19.506905537 +0000 UTC m=+142.739413521" watchObservedRunningTime="2026-01-23 06:22:19.57825454 +0000 UTC m=+142.810762514" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.587835 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.598245 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.625994 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l7559"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.639049 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" event={"ID":"4cbb22dd-2c0b-4be3-80b5-affe170bb787","Type":"ContainerStarted","Data":"d0601c54470cf6c8029b16e83aa624a9a8c588b8ba1bbdd78952b8a53a7592af"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.639112 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" event={"ID":"4cbb22dd-2c0b-4be3-80b5-affe170bb787","Type":"ContainerStarted","Data":"8db41cdfd57b731d9afc3c4e9dee640211199512f42d4a6eceeff3e379861dc6"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.650533 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.652744 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" event={"ID":"ea1f6b10-9910-420e-96c7-cfd389d931c4","Type":"ContainerStarted","Data":"d38a107a6d479fd580c557ccc4302c7b6fd9e40912f4bb9dc6263f3ed769712d"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.655707 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.657534 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.157509827 +0000 UTC m=+143.390017801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.667765 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" event={"ID":"fb1480f7-5616-46a7-a37f-479f33615b7f","Type":"ContainerStarted","Data":"02704426ae8fbbdd91cfe2688aecd5aaf35e83cc3eb43737cf17ea5ce63b32af"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.676105 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.680626 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" event={"ID":"709308c5-9977-4e05-98f0-b745c298db67","Type":"ContainerStarted","Data":"59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.680660 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.680671 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" event={"ID":"709308c5-9977-4e05-98f0-b745c298db67","Type":"ContainerStarted","Data":"1efaa9123218c444f1f183eafbced58c2d4ccfc2ba082c7d5c5448a3668e9683"} Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.715571 4784 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" podStartSLOduration=121.715548887 podStartE2EDuration="2m1.715548887s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:19.69694229 +0000 UTC m=+142.929450264" watchObservedRunningTime="2026-01-23 06:22:19.715548887 +0000 UTC m=+142.948056861" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.727207 4784 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-h2hn7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.727282 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" podUID="709308c5-9977-4e05-98f0-b745c298db67" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.727385 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fsrlb"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.744275 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5st7s"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.749329 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2stcb"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.752718 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6zvvc"] Jan 23 06:22:19 
crc kubenswrapper[4784]: I0123 06:22:19.759766 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.765702 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r4srv"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.766782 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm"] Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.768611 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.268587822 +0000 UTC m=+143.501095806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.774814 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.818314 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.820611 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh"] Jan 23 06:22:19 crc kubenswrapper[4784]: W0123 06:22:19.821442 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277df242_6850_47b2_af69_2e33cd07657b.slice/crio-c02352bae248c5c04b4de1ac6bf71503a2829f95ed13b205c2cf443f3e542343 WatchSource:0}: Error finding container c02352bae248c5c04b4de1ac6bf71503a2829f95ed13b205c2cf443f3e542343: Status 404 returned error can't find the container with id c02352bae248c5c04b4de1ac6bf71503a2829f95ed13b205c2cf443f3e542343 Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.821820 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" podStartSLOduration=121.82179462 podStartE2EDuration="2m1.82179462s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:19.797680083 +0000 UTC m=+143.030188057" watchObservedRunningTime="2026-01-23 06:22:19.82179462 +0000 UTC m=+143.054302614" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.844415 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.858103 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.863283 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.864847 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.364819581 +0000 UTC m=+143.597327735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.864938 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-p55ct"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.870392 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" podStartSLOduration=121.870372412 podStartE2EDuration="2m1.870372412s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:19.842492313 +0000 UTC m=+143.075000297" watchObservedRunningTime="2026-01-23 06:22:19.870372412 +0000 UTC m=+143.102880386" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.871680 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.877829 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6q96w"] Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.882668 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" podStartSLOduration=121.882641226 podStartE2EDuration="2m1.882641226s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:19.863346061 +0000 UTC m=+143.095854035" watchObservedRunningTime="2026-01-23 06:22:19.882641226 +0000 UTC m=+143.115149200" Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.931133 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k"] Jan 23 06:22:19 crc kubenswrapper[4784]: W0123 06:22:19.963568 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd230e8a_2ec3_40e3_b964_66279c61bdfb.slice/crio-a6d6ff5bba4cfbf6ea346bd1661751ea2ea232e1d1144fbcecefd5f555106097 WatchSource:0}: Error finding container a6d6ff5bba4cfbf6ea346bd1661751ea2ea232e1d1144fbcecefd5f555106097: Status 404 returned error can't find the container with id a6d6ff5bba4cfbf6ea346bd1661751ea2ea232e1d1144fbcecefd5f555106097 Jan 23 06:22:19 crc kubenswrapper[4784]: I0123 06:22:19.964909 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:19 crc kubenswrapper[4784]: E0123 06:22:19.966713 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.466698154 +0000 UTC m=+143.699206128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: W0123 06:22:19.997954 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aafef89_45b3_4517_bc7c_f669580a3c1a.slice/crio-2b87cfcf1ce7b4f3d318cdb625ae270283f2e76a79a2adec18f481da46c5181b WatchSource:0}: Error finding container 2b87cfcf1ce7b4f3d318cdb625ae270283f2e76a79a2adec18f481da46c5181b: Status 404 returned error can't find the container with id 2b87cfcf1ce7b4f3d318cdb625ae270283f2e76a79a2adec18f481da46c5181b Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.066220 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.067239 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.567201911 +0000 UTC m=+143.799709885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.178007 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.178700 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.678678605 +0000 UTC m=+143.911186589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.249195 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:20 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:20 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:20 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.249275 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.280553 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.280805 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:20.780742054 +0000 UTC m=+144.013250028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.281191 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.281641 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.781624418 +0000 UTC m=+144.014132392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.307492 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.307566 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.382919 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.383192 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.883147002 +0000 UTC m=+144.115654986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.383308 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.383787 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.883779409 +0000 UTC m=+144.116287383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.483953 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.484315 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:20.984299296 +0000 UTC m=+144.216807270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.585804 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.586268 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.086252782 +0000 UTC m=+144.318760756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.686727 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.687042 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.186928392 +0000 UTC m=+144.419436366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.689036 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.689587 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.189570034 +0000 UTC m=+144.422078008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.753178 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" event={"ID":"fffb502e-8e9d-4eaa-9132-e166d4ad1386","Type":"ContainerStarted","Data":"bcf2d4841c60028d081dc3b3ef6a17e6f1adc8c10ca803cda396147da9dc467c"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.758379 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" event={"ID":"e2ebc8cd-38f9-4337-a299-7faafe40b3c4","Type":"ContainerStarted","Data":"5f04b6d1a36c1eb649f658be754b64636523796ded7756b005cdcd1a033279ac"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.759835 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" event={"ID":"120c77d9-d427-4c8c-87fb-4443fe6ee918","Type":"ContainerStarted","Data":"b8bd8b12e5db2dabee45a3aae2868e2f52cd762010239e1ccfc5ed0e133f6a39"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.770658 4784 generic.go:334] "Generic (PLEG): container finished" podID="f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39" containerID="c871567929cc6e43b2f235cc54ecd74836f68d9eab0e1bc70fe89f9c9903f506" exitCode=0 Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.770798 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" 
event={"ID":"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39","Type":"ContainerDied","Data":"c871567929cc6e43b2f235cc54ecd74836f68d9eab0e1bc70fe89f9c9903f506"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.779387 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p55ct" event={"ID":"01651938-c65b-4314-a29c-02aad47fc6be","Type":"ContainerStarted","Data":"bbbf899a329bc175b6dbfc04c4f3cf7c4d338d97342958c3d38c76e7645327d0"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.783429 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" event={"ID":"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b","Type":"ContainerStarted","Data":"5e02362b088a985228f42207ee6164bfbf64ba777935d0041cabb7d732b6b20e"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.784301 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" event={"ID":"ee95cb1e-738d-4e44-bcd9-978114c4e440","Type":"ContainerStarted","Data":"2da913a947721ea4c1add700232ccfd9cd595938170a486f27302277618c506c"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.788852 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6q96w" event={"ID":"409ee6bf-e36a-4a14-9223-32c726962eab","Type":"ContainerStarted","Data":"b105c09398179b40d98acfbdd797d403ae7ab4231ba072432ffa1306f6d5a6f8"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.793530 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.793984 4784 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.293963847 +0000 UTC m=+144.526471821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.795029 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" event={"ID":"3dbf9ccb-15be-4b0f-bf67-8638a57bb848","Type":"ContainerStarted","Data":"66bdf8d0c7299889e4814f72976bc5729f5c8cf3144b9383d0abeed361014b78"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.795700 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" event={"ID":"75a8ac2c-f286-499e-9faa-03f25bc7f579","Type":"ContainerStarted","Data":"fea4f5d079b505606c4ebc0efa2729971d331ea21c7911eced39f68d5923c913"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.796569 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" event={"ID":"34f058be-8b3f-4835-aab4-ab7df5f787b0","Type":"ContainerStarted","Data":"5b1b9e917675b7b28b04181db5d846d5908124a2a4784abba8efc75736160818"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.797297 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" event={"ID":"4f6b7620-7eef-4758-8e27-44453c3925f9","Type":"ContainerStarted","Data":"046852b46b99c7857ebe0f3a6e3f4b7e7a507fcfadc0d5106be56bc309053772"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.801848 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" event={"ID":"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac","Type":"ContainerStarted","Data":"f0240325aa4eab84af0f8babcd5a0df01a6975cf1c2f115c69128560c5ab0ea3"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.806388 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" event={"ID":"3012e555-7659-4858-aa51-cb6ae6fa6a36","Type":"ContainerStarted","Data":"e8d909dd0a7bebc0ec8da629e4ad6adf640cb94247056a4f355418517f4d6347"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.807803 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" event={"ID":"0db28f19-b83b-46a0-befb-1720ccd656bb","Type":"ContainerStarted","Data":"255c751b14e101223400ee384de7b516bb55cedb7c8c4c24f60e8ca299d0020e"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.808629 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5st7s" event={"ID":"d54c2ab7-ba8c-4e44-b4b5-cdb617753316","Type":"ContainerStarted","Data":"3d8bd4a25ef54830fb6395595ab321fdc4b11a9924ac562138bb9ca2e8039786"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.812450 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7559" event={"ID":"52fb80ca-3a92-42b7-a9b6-7de2cb478603","Type":"ContainerStarted","Data":"97af24151a60897a315c04df8774fc7b8893265199f988c6ac7f9c8a4c19e0a5"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.813247 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" event={"ID":"0bdea249-8d22-4c90-81ed-0fd52338641e","Type":"ContainerStarted","Data":"2742e43930f4ff18142aa51c603c40eec35daacc10809f86bea1045fe7fb77a1"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.816854 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pwcvq" event={"ID":"fb1480f7-5616-46a7-a37f-479f33615b7f","Type":"ContainerStarted","Data":"5542522e13a86843b9dcb29b6577ac3c7e3a7daff783c864a6121fe2208f4393"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.818322 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" event={"ID":"e03c05ee-79c1-492f-bc57-f4241be21623","Type":"ContainerStarted","Data":"496bc4572bbacaa20e9da355bb02f471fa4e8311b260f3ac22363e8c21ef8d59"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.819631 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" event={"ID":"277df242-6850-47b2-af69-2e33cd07657b","Type":"ContainerStarted","Data":"c02352bae248c5c04b4de1ac6bf71503a2829f95ed13b205c2cf443f3e542343"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.820453 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" event={"ID":"cf4fdcc3-7a45-404d-ac8a-86700c1b401f","Type":"ContainerStarted","Data":"f3cc3bac987be17251e2a9cf36368bc3bf51ee8f2f42e06475f33f0243d6b681"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.821543 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xqdqx" 
event={"ID":"0a7c339b-1a18-4a89-ad41-889f28df7304","Type":"ContainerStarted","Data":"09560db2ad26f91eccddfc3967b67a23dca321fa6e380e8fe681c1f8906a95de"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.822652 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2stcb" event={"ID":"b6c8a935-b603-40f3-8051-c705e23c20f3","Type":"ContainerStarted","Data":"b981e5830f1a64fd52a46763c4df01a51479a325a7c9b3ad159b741c20ed218d"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.823538 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" event={"ID":"97fee47e-af30-44f4-b7ce-c7277e65dc35","Type":"ContainerStarted","Data":"98ed1d54fe1a31967ab23ceed69ed25dac09cb4cc24807097b739be7732c0895"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.824799 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-65wmn" event={"ID":"ea1f6b10-9910-420e-96c7-cfd389d931c4","Type":"ContainerStarted","Data":"4037724655ceaa3c41082eae07a142411d55a4632b0f64ae0ba6203ecf7e2755"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.826933 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" event={"ID":"3aafef89-45b3-4517-bc7c-f669580a3c1a","Type":"ContainerStarted","Data":"2b87cfcf1ce7b4f3d318cdb625ae270283f2e76a79a2adec18f481da46c5181b"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.828107 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" event={"ID":"cd230e8a-2ec3-40e3-b964-66279c61bdfb","Type":"ContainerStarted","Data":"a6d6ff5bba4cfbf6ea346bd1661751ea2ea232e1d1144fbcecefd5f555106097"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.830007 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-bb5s2" event={"ID":"b03d7aa3-b8a0-4725-b16d-908e50b963e4","Type":"ContainerStarted","Data":"66ec6b1c30fe68b933b38e6c6166e751159f9cc875d0f749afafadb0da609e1a"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.830952 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" event={"ID":"dc93f303-432c-4487-a225-f0af2fa5bd49","Type":"ContainerStarted","Data":"a122d69361dc583a6ddf8788ab5e015e46d98606cd5a471d2bbda0d9bcce972c"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.833812 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" event={"ID":"ba69339a-1102-4a25-ae4e-a70b643e6ff1","Type":"ContainerStarted","Data":"9081ddef763dd0e4cfb04753f2f7e3ed824ba7e0d4d39b53bfac4e15e93cdd02"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.837995 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" event={"ID":"bd2caef4-07a6-420b-80e0-c2f26b044bee","Type":"ContainerStarted","Data":"59566eac023e9e7f27560226a4831b2e38b432bcd1b4d4ccd8a61ca53f27eaf6"} Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.895320 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:20 crc kubenswrapper[4784]: E0123 06:22:20.895662 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:21.395648504 +0000 UTC m=+144.628156468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:20 crc kubenswrapper[4784]: I0123 06:22:20.994368 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.000649 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.002804 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.502785882 +0000 UTC m=+144.735293866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.035217 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ltcmm" podStartSLOduration=123.035189224 podStartE2EDuration="2m3.035189224s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:20.88519457 +0000 UTC m=+144.117702534" watchObservedRunningTime="2026-01-23 06:22:21.035189224 +0000 UTC m=+144.267697198" Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.107774 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.108189 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.608177071 +0000 UTC m=+144.840685045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.209657 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.210619 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.710592359 +0000 UTC m=+144.943100333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.242983 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:21 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:21 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:21 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.243051 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.312761 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.313239 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:21.813217113 +0000 UTC m=+145.045725087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.413649 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.413902 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.913865884 +0000 UTC m=+145.146373858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.414025 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.414391 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:21.914383667 +0000 UTC m=+145.146891641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.516012 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.516393 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.016359544 +0000 UTC m=+145.248867518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.516471 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.516957 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.016934029 +0000 UTC m=+145.249442213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.617590 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.617792 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.117745613 +0000 UTC m=+145.350253587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.617986 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.618472 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.118461023 +0000 UTC m=+145.350968997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.719033 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.719400 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.219383021 +0000 UTC m=+145.451890995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.820798 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.821356 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.321331917 +0000 UTC m=+145.553839891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.844674 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xqdqx" event={"ID":"0a7c339b-1a18-4a89-ad41-889f28df7304","Type":"ContainerStarted","Data":"bbd3700dc3ee8c68e5abf500a38c205c676cb8f7a73947f3e0ed980aa517a594"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.846946 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" event={"ID":"dc93f303-432c-4487-a225-f0af2fa5bd49","Type":"ContainerStarted","Data":"d4600f59bb969bb390239da2a85643bf146a362b38c67f7da24229e4ef52f2bf"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.848562 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" event={"ID":"34f058be-8b3f-4835-aab4-ab7df5f787b0","Type":"ContainerStarted","Data":"a05273c3e605763700c3b2a55dd4adc5c0435f93bd528ed4276c0a8fac433a2e"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.851146 4784 generic.go:334] "Generic (PLEG): container finished" podID="51a0574a-18f3-4fea-b3c9-ed345668f240" containerID="a53eaccdd43e4b0e6529f07058c01c7409aafe70391864245afb5f612efe8457" exitCode=0 Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.851262 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" 
event={"ID":"51a0574a-18f3-4fea-b3c9-ed345668f240","Type":"ContainerDied","Data":"a53eaccdd43e4b0e6529f07058c01c7409aafe70391864245afb5f612efe8457"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.852999 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" event={"ID":"0db28f19-b83b-46a0-befb-1720ccd656bb","Type":"ContainerStarted","Data":"95b8de5ee834377d09f4311b6eda0df8d7ff64c9470739cdfe6cf014114529ae"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.854219 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" event={"ID":"c7c1f7c1-2f12-473f-b782-750fa84c8b03","Type":"ContainerStarted","Data":"638d784f9df7949febecbb7079239c96b56279550e50ae0d388ab63994d3e7a5"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.859717 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bb5s2" event={"ID":"b03d7aa3-b8a0-4725-b16d-908e50b963e4","Type":"ContainerStarted","Data":"8f44e8dbc7b5fd572f4179dab97ef9047e25478a13ef285f736eb4330cedf7cc"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.868970 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" event={"ID":"bd2caef4-07a6-420b-80e0-c2f26b044bee","Type":"ContainerStarted","Data":"c7ed85c0f0bbb1c2e99110284433de20b2431b497d9a8310f869bc89fd12dd8a"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.872582 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" event={"ID":"3dbf9ccb-15be-4b0f-bf67-8638a57bb848","Type":"ContainerStarted","Data":"37e15a4d2a15f1a588acc9b88c6ef68de15c0a6fa8512670bb65796ecfc13626"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.875386 4784 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" event={"ID":"32f1325e-ec9d-4375-855d-970361b2ac03","Type":"ContainerStarted","Data":"c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2"} Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.876446 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.883855 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vpgh7" podStartSLOduration=123.883833838 podStartE2EDuration="2m3.883833838s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:21.882925014 +0000 UTC m=+145.115432988" watchObservedRunningTime="2026-01-23 06:22:21.883833838 +0000 UTC m=+145.116341812" Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.893828 4784 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4wkg9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.893917 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" podUID="32f1325e-ec9d-4375-855d-970361b2ac03" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.924979 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:21 crc kubenswrapper[4784]: E0123 06:22:21.925542 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.425519943 +0000 UTC m=+145.658027907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:21 crc kubenswrapper[4784]: I0123 06:22:21.933534 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" podStartSLOduration=123.933513231 podStartE2EDuration="2m3.933513231s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:21.932681838 +0000 UTC m=+145.165189812" watchObservedRunningTime="2026-01-23 06:22:21.933513231 +0000 UTC m=+145.166021205" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.030591 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: 
\"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.035886 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.535866237 +0000 UTC m=+145.768374211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.132100 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.132332 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.632295343 +0000 UTC m=+145.864803317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.132821 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.133331 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.63331425 +0000 UTC m=+145.865822224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.234321 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.234715 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.73469602 +0000 UTC m=+145.967203994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.249505 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:22 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:22 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:22 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.249570 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.338277 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.338780 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:22.838739993 +0000 UTC m=+146.071248127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.438940 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.439228 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.939192187 +0000 UTC m=+146.171700161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.439670 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.440028 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:22.940013429 +0000 UTC m=+146.172521403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.541283 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.541769 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.041738089 +0000 UTC m=+146.274246063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.643280 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.643901 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.14387722 +0000 UTC m=+146.376385194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.745030 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.745249 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.245214108 +0000 UTC m=+146.477722082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.745320 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.745870 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.245853326 +0000 UTC m=+146.478361300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.846277 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.846816 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.346784064 +0000 UTC m=+146.579292028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.905299 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" event={"ID":"97fee47e-af30-44f4-b7ce-c7277e65dc35","Type":"ContainerStarted","Data":"049033724eb0ed9892026eb5737f6b89b5d836c435f98721c8bd7676511242af"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.908323 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" event={"ID":"cd230e8a-2ec3-40e3-b964-66279c61bdfb","Type":"ContainerStarted","Data":"b757aaa3edcd7f6f8f627810d78b3b4955df8395f248cd617074730f9fb0c596"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.910957 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-p55ct" event={"ID":"01651938-c65b-4314-a29c-02aad47fc6be","Type":"ContainerStarted","Data":"450387b07fe4d8005bb71069ccd18e39841d1ecb8c267742bcad317e5bf774af"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.913450 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" event={"ID":"e03c05ee-79c1-492f-bc57-f4241be21623","Type":"ContainerStarted","Data":"51460a5ffbbb14652ee317791def7f169920a288ca5ee6f1e3a8c3f0b2c8f408"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.915121 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" event={"ID":"277df242-6850-47b2-af69-2e33cd07657b","Type":"ContainerStarted","Data":"ea32541ef8b603c952be6e998d7ddf9a5a12fb51351434fb4bb82a5f4d11c531"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.915808 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.918228 4784 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sgngm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.918364 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" podUID="277df242-6850-47b2-af69-2e33cd07657b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.919306 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" event={"ID":"ee95cb1e-738d-4e44-bcd9-978114c4e440","Type":"ContainerStarted","Data":"96d2770f0dbb3baf69772d2fa80aec672c3ab8cca972e5f8dad28983e487c13b"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.924281 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" event={"ID":"75a8ac2c-f286-499e-9faa-03f25bc7f579","Type":"ContainerStarted","Data":"900aaff1209fa4c4ea1e6d3e98100b3aa7e4e842ef5865c3977b90057107bb2d"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.928225 4784 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" event={"ID":"e2ebc8cd-38f9-4337-a299-7faafe40b3c4","Type":"ContainerStarted","Data":"cb36512342348ce49ff13544d921a82efe4fa86ae3dc52a3acb2fee1cbede341"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.933292 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4srv" podStartSLOduration=124.933263589 podStartE2EDuration="2m4.933263589s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:22.929480106 +0000 UTC m=+146.161988080" watchObservedRunningTime="2026-01-23 06:22:22.933263589 +0000 UTC m=+146.165771563" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.934593 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" event={"ID":"0bdea249-8d22-4c90-81ed-0fd52338641e","Type":"ContainerStarted","Data":"4aa8072bee8a7924f9271e1f4a7fac47102fc8a016b6f3c3aa7093e7e76869a6"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.938910 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2stcb" event={"ID":"b6c8a935-b603-40f3-8051-c705e23c20f3","Type":"ContainerStarted","Data":"4731e6c21064788a257b5c1b044b1d035d18e1063df4a71aec4a44d863f42d2b"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.957623 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" event={"ID":"bcb815b1-b9f3-4af3-85a7-cf4162be8f7b","Type":"ContainerStarted","Data":"9517153f92c0641f4ec6e7280fdc296a06bed71a6720922fb114752516b91776"} Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.957690 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.960049 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.960893 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:22 crc kubenswrapper[4784]: E0123 06:22:22.966706 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.466682569 +0000 UTC m=+146.699190543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.967290 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.969497 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.969576 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.970718 4784 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n9rnn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.970779 4784 patch_prober.go:28] interesting pod/console-operator-58897d9998-xqdqx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 
10.217.0.12:8443: connect: connection refused" start-of-body= Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.970841 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xqdqx" podUID="0a7c339b-1a18-4a89-ad41-889f28df7304" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.970856 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 23 06:22:22 crc kubenswrapper[4784]: I0123 06:22:22.999227 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-czr5t" podStartSLOduration=124.999195994 podStartE2EDuration="2m4.999195994s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:22.956216233 +0000 UTC m=+146.188724217" watchObservedRunningTime="2026-01-23 06:22:22.999195994 +0000 UTC m=+146.231703968" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.002007 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" podStartSLOduration=125.00199999 podStartE2EDuration="2m5.00199999s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:22.986353744 +0000 UTC m=+146.218861738" 
watchObservedRunningTime="2026-01-23 06:22:23.00199999 +0000 UTC m=+146.234507964" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.068315 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.073724 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.573681761 +0000 UTC m=+146.806189735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.074352 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" podStartSLOduration=125.074304109 podStartE2EDuration="2m5.074304109s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.064944644 +0000 UTC m=+146.297452638" watchObservedRunningTime="2026-01-23 06:22:23.074304109 +0000 UTC m=+146.306812083" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.171461 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.172296 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.672279275 +0000 UTC m=+146.904787249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.184876 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-h8trm" podStartSLOduration=125.184844257 podStartE2EDuration="2m5.184844257s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.135733 +0000 UTC m=+146.368240974" watchObservedRunningTime="2026-01-23 06:22:23.184844257 +0000 UTC m=+146.417352231" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.250236 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:23 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:23 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:23 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.250357 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.273978 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.274522 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.774482798 +0000 UTC m=+147.006990772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.274711 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.275384 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.775369482 +0000 UTC m=+147.007877456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.282988 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fsrlb" podStartSLOduration=125.282958859 podStartE2EDuration="2m5.282958859s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.281925061 +0000 UTC m=+146.514433045" watchObservedRunningTime="2026-01-23 06:22:23.282958859 +0000 UTC m=+146.515466833" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.284066 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-p55ct" podStartSLOduration=10.284055818 podStartE2EDuration="10.284055818s" podCreationTimestamp="2026-01-23 06:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.190243404 +0000 UTC m=+146.422751378" watchObservedRunningTime="2026-01-23 06:22:23.284055818 +0000 UTC m=+146.516563792" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.325921 4784 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hhgpf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]log ok Jan 23 06:22:23 crc 
kubenswrapper[4784]: [+]etcd ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/max-in-flight-filter ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 06:22:23 crc kubenswrapper[4784]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 06:22:23 crc kubenswrapper[4784]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/openshift.io-startinformers ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 06:22:23 crc kubenswrapper[4784]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 06:22:23 crc kubenswrapper[4784]: livez check failed Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.326028 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" podUID="4cbb22dd-2c0b-4be3-80b5-affe170bb787" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.383985 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.384204 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.884162504 +0000 UTC m=+147.116670478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.384374 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.384893 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:23.884884323 +0000 UTC m=+147.117392297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.499370 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.500235 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.000208524 +0000 UTC m=+147.232716498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.550920 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2stcb" podStartSLOduration=125.550891703 podStartE2EDuration="2m5.550891703s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.400931841 +0000 UTC m=+146.633439815" watchObservedRunningTime="2026-01-23 06:22:23.550891703 +0000 UTC m=+146.783399677" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.604281 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.604329 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.604402 4784 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.604922 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.104902183 +0000 UTC m=+147.337410157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.611677 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bb5s2" podStartSLOduration=125.611656367 podStartE2EDuration="2m5.611656367s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.556965618 +0000 UTC m=+146.789473592" watchObservedRunningTime="2026-01-23 06:22:23.611656367 +0000 UTC m=+146.844164341" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.697896 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kv2z9" podStartSLOduration=125.697871135 podStartE2EDuration="2m5.697871135s" podCreationTimestamp="2026-01-23 06:20:18 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.652265883 +0000 UTC m=+146.884773857" watchObservedRunningTime="2026-01-23 06:22:23.697871135 +0000 UTC m=+146.930379119" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.699945 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jbckc" podStartSLOduration=125.69993656 podStartE2EDuration="2m5.69993656s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.696444086 +0000 UTC m=+146.928952070" watchObservedRunningTime="2026-01-23 06:22:23.69993656 +0000 UTC m=+146.932444534" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.705336 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.705859 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.205831452 +0000 UTC m=+147.438339426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.753219 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" podStartSLOduration=125.75318577 podStartE2EDuration="2m5.75318577s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.749633164 +0000 UTC m=+146.982141148" watchObservedRunningTime="2026-01-23 06:22:23.75318577 +0000 UTC m=+146.985693744" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.794633 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xqdqx" podStartSLOduration=125.794606808 podStartE2EDuration="2m5.794606808s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:23.791551415 +0000 UTC m=+147.024059409" watchObservedRunningTime="2026-01-23 06:22:23.794606808 +0000 UTC m=+147.027114782" Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.806820 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: 
\"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.807466 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.307435567 +0000 UTC m=+147.539943541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.967100 4784 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4wkg9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.967199 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.967207 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" podUID="32f1325e-ec9d-4375-855d-970361b2ac03" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.967597 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.467559866 +0000 UTC m=+147.700067830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:23 crc kubenswrapper[4784]: I0123 06:22:23.967786 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:23 crc kubenswrapper[4784]: E0123 06:22:23.968298 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.468276026 +0000 UTC m=+147.700784200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.068634 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.069310 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.569287976 +0000 UTC m=+147.801795950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.071471 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" event={"ID":"4f6b7620-7eef-4758-8e27-44453c3925f9","Type":"ContainerStarted","Data":"4483980a416c6b67f654e3bee859275f04ed5cc597ac7ca6ee12af375da91e76"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.072199 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.074278 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" event={"ID":"f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39","Type":"ContainerStarted","Data":"3398e4137ed174ac67ee227ed67aa7b4493d6f22563e73b414fe06d7b594d369"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.074732 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.075210 4784 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mtnbf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.075280 4784 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" podUID="4f6b7620-7eef-4758-8e27-44453c3925f9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.093983 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6q96w" event={"ID":"409ee6bf-e36a-4a14-9223-32c726962eab","Type":"ContainerStarted","Data":"4731a4d04d2bec02a38750a59ee78a07f212a2746a3c2b54b506fdf6cda7d4a2"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.104928 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5st7s" event={"ID":"d54c2ab7-ba8c-4e44-b4b5-cdb617753316","Type":"ContainerStarted","Data":"62ffcbba8bda9f786a755438b9dd6f05caadd6a2989b024dccdc456f4fb0f435"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.141538 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" event={"ID":"3012e555-7659-4858-aa51-cb6ae6fa6a36","Type":"ContainerStarted","Data":"fa1863f91be661730fdf108cce9c11ff2e0ecffef893674d583f11dacde864ab"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.153394 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" event={"ID":"fffb502e-8e9d-4eaa-9132-e166d4ad1386","Type":"ContainerStarted","Data":"a1bffd5ef31f4cf338454db880777110a7b03aa4d07b944f132811dda92617fd"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.185379 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.185835 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.685820198 +0000 UTC m=+147.918328172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.222953 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" event={"ID":"34f058be-8b3f-4835-aab4-ab7df5f787b0","Type":"ContainerStarted","Data":"78bb6b15dd935e1f718bc3be69dc5b7874cd4a093daf592a9e2c8adba8c3c49a"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.245793 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" podStartSLOduration=126.245771891 podStartE2EDuration="2m6.245771891s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:24.243410127 +0000 UTC m=+147.475918101" watchObservedRunningTime="2026-01-23 06:22:24.245771891 +0000 UTC m=+147.478279865" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.288515 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.288897 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:24 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:24 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:24 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.288990 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.290395 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.790362695 +0000 UTC m=+148.022870829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.317915 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" event={"ID":"120c77d9-d427-4c8c-87fb-4443fe6ee918","Type":"ContainerStarted","Data":"9715059e64b5af5841d633d657ce78b4b233d8c607ebf6c5b8a4305dacd267dd"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.361132 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" podStartSLOduration=126.361099661 podStartE2EDuration="2m6.361099661s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:24.299880294 +0000 UTC m=+147.532388268" watchObservedRunningTime="2026-01-23 06:22:24.361099661 +0000 UTC m=+147.593607635" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.398121 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.419310 4784 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:24.919282364 +0000 UTC m=+148.151790338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.462668 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w9xbs" podStartSLOduration=126.462644865 podStartE2EDuration="2m6.462644865s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:24.461485723 +0000 UTC m=+147.693993697" watchObservedRunningTime="2026-01-23 06:22:24.462644865 +0000 UTC m=+147.695152839" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.462902 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" event={"ID":"cf4fdcc3-7a45-404d-ac8a-86700c1b401f","Type":"ContainerStarted","Data":"f4a804d5517f451100a7002cf631141832856e047f41055376b1187719fad45f"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.506028 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.506556 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.006523759 +0000 UTC m=+148.239031743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.528128 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" event={"ID":"3aafef89-45b3-4517-bc7c-f669580a3c1a","Type":"ContainerStarted","Data":"a3b3ec4a425bc522ea67791ce95aa98264b9a90885559b1bd6ba20b1cfa0773a"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.629267 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" event={"ID":"ee95cb1e-738d-4e44-bcd9-978114c4e440","Type":"ContainerStarted","Data":"ef4adbf83101898a35b1ed58d96a98684f394f508ae2bb75459f8bbf6de3e729"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.629709 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: 
\"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.630816 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.130794003 +0000 UTC m=+148.363302167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.635260 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" event={"ID":"bd2caef4-07a6-420b-80e0-c2f26b044bee","Type":"ContainerStarted","Data":"7361b26bec8b749380f1df0189644270d84d33c150950fce1c6ad35681bacaf9"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.640227 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" event={"ID":"c7c1f7c1-2f12-473f-b782-750fa84c8b03","Type":"ContainerStarted","Data":"8bbd75a9af5bdcbd7676bfb203a0165ef5b9153d0905632848cd7c26350bcd13"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.664661 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" event={"ID":"86f3bcf7-f2f4-4ed0-aae2-be5f61657fac","Type":"ContainerStarted","Data":"a6758cb7e9557c9953b6724d57aaaf9bd24ea6181f73aa28a1d7fbd9055be8e1"} Jan 23 06:22:24 crc kubenswrapper[4784]: 
I0123 06:22:24.665563 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.680653 4784 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sk4wc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.680739 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" podUID="86f3bcf7-f2f4-4ed0-aae2-be5f61657fac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.684028 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7559" event={"ID":"52fb80ca-3a92-42b7-a9b6-7de2cb478603","Type":"ContainerStarted","Data":"71ca04a6a04b630bec30f55fc73b74b40c0bbbef62970d518b924fde1463925f"} Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.690359 4784 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n9rnn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.690405 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" 
Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.692960 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.693039 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.709050 4784 patch_prober.go:28] interesting pod/console-operator-58897d9998-xqdqx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.709125 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xqdqx" podUID="0a7c339b-1a18-4a89-ad41-889f28df7304" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.729234 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.730785 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.732251 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.232233975 +0000 UTC m=+148.464741949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.755973 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4fwv" podStartSLOduration=126.75594614 podStartE2EDuration="2m6.75594614s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:24.548249085 +0000 UTC m=+147.780757059" watchObservedRunningTime="2026-01-23 06:22:24.75594614 +0000 UTC m=+147.988454114" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.757904 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" podStartSLOduration=126.757890273 podStartE2EDuration="2m6.757890273s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 06:22:24.757522013 +0000 UTC m=+147.990029997" watchObservedRunningTime="2026-01-23 06:22:24.757890273 +0000 UTC m=+147.990398247" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.816584 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fzjwm" podStartSLOduration=126.81655317 podStartE2EDuration="2m6.81655317s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:24.813367124 +0000 UTC m=+148.045875098" watchObservedRunningTime="2026-01-23 06:22:24.81655317 +0000 UTC m=+148.049061144" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.866877 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.870780 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.370733875 +0000 UTC m=+148.603241849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.952775 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6zvvc" podStartSLOduration=126.952738167 podStartE2EDuration="2m6.952738167s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:24.951511144 +0000 UTC m=+148.184019128" watchObservedRunningTime="2026-01-23 06:22:24.952738167 +0000 UTC m=+148.185246141" Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.968058 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:24 crc kubenswrapper[4784]: E0123 06:22:24.968554 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.468529707 +0000 UTC m=+148.701037681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:24 crc kubenswrapper[4784]: I0123 06:22:24.992049 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" podStartSLOduration=126.991737729 podStartE2EDuration="2m6.991737729s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:24.989062117 +0000 UTC m=+148.221570091" watchObservedRunningTime="2026-01-23 06:22:24.991737729 +0000 UTC m=+148.224245703" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.058057 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-f8rrr" podStartSLOduration=127.058039314 podStartE2EDuration="2m7.058039314s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:25.056999246 +0000 UTC m=+148.289507230" watchObservedRunningTime="2026-01-23 06:22:25.058039314 +0000 UTC m=+148.290547278" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.070683 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: 
\"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.071350 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.571330256 +0000 UTC m=+148.803838240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.093146 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.112558 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wn4qk" podStartSLOduration=127.112531158 podStartE2EDuration="2m7.112531158s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:25.111439598 +0000 UTC m=+148.343947572" watchObservedRunningTime="2026-01-23 06:22:25.112531158 +0000 UTC m=+148.345039132" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.172359 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.172723 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.672703126 +0000 UTC m=+148.905211100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.258700 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:25 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:25 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:25 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.258778 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.275523 4784 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.276692 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.776674186 +0000 UTC m=+149.009182160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.367344 4784 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hhgpf container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]log ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]etcd ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/max-in-flight-filter ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 06:22:25 crc kubenswrapper[4784]: 
[+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 06:22:25 crc kubenswrapper[4784]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/openshift.io-startinformers ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 06:22:25 crc kubenswrapper[4784]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 06:22:25 crc kubenswrapper[4784]: livez check failed Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.368077 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" podUID="4cbb22dd-2c0b-4be3-80b5-affe170bb787" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.383707 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.384023 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.384099 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.384206 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.384240 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.388477 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.888442599 +0000 UTC m=+149.120950573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.388859 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.401041 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.408159 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.413655 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.485733 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.486632 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:25.986609312 +0000 UTC m=+149.219117286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.587462 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.587689 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.087650393 +0000 UTC m=+149.320158367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.587943 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.588451 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.088438514 +0000 UTC m=+149.320946678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.592518 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.602167 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.614894 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.690085 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.690432 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.190409041 +0000 UTC m=+149.422917015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.726514 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5st7s" event={"ID":"d54c2ab7-ba8c-4e44-b4b5-cdb617753316","Type":"ContainerStarted","Data":"f79db8c1f1795ca7831ede943a97b415eb17c5f2749e237836951bdaa88defe1"} Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.727683 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.763220 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" event={"ID":"3012e555-7659-4858-aa51-cb6ae6fa6a36","Type":"ContainerStarted","Data":"ad74e67a6053477947e5ecaf4a3f1bff9e9c81ebd19792496b68d72e0c2a6f2c"} Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.764505 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.792371 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 
06:22:25.794736 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.29472182 +0000 UTC m=+149.527229794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.860846 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xhsh" event={"ID":"0bdea249-8d22-4c90-81ed-0fd52338641e","Type":"ContainerStarted","Data":"9940a2185fe2de231b187f9ecf870eb345b0aada62933e0bad855833b87da747"} Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.873346 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" event={"ID":"51a0574a-18f3-4fea-b3c9-ed345668f240","Type":"ContainerStarted","Data":"74da9cb7c955ca9bff3ee95668d8d0b5e9b98376472c35c362357af139a99546"} Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.893995 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.895103 4784 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.395080353 +0000 UTC m=+149.627588327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.900643 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" podStartSLOduration=127.900623744 podStartE2EDuration="2m7.900623744s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:25.897779956 +0000 UTC m=+149.130287940" watchObservedRunningTime="2026-01-23 06:22:25.900623744 +0000 UTC m=+149.133131718" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.901681 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5st7s" podStartSLOduration=12.901674512 podStartE2EDuration="12.901674512s" podCreationTimestamp="2026-01-23 06:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:25.795133792 +0000 UTC m=+149.027641766" watchObservedRunningTime="2026-01-23 06:22:25.901674512 +0000 UTC m=+149.134182486" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.925714 4784 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" event={"ID":"c7c1f7c1-2f12-473f-b782-750fa84c8b03","Type":"ContainerStarted","Data":"da87b491cc4d462ebe07c2772c910e626566a455fc0620e2bd6ca7f2e03c03bc"} Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.947618 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" podStartSLOduration=127.947591883 podStartE2EDuration="2m7.947591883s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:25.946804731 +0000 UTC m=+149.179312705" watchObservedRunningTime="2026-01-23 06:22:25.947591883 +0000 UTC m=+149.180099857" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.954664 4784 generic.go:334] "Generic (PLEG): container finished" podID="cd230e8a-2ec3-40e3-b964-66279c61bdfb" containerID="b757aaa3edcd7f6f8f627810d78b3b4955df8395f248cd617074730f9fb0c596" exitCode=0 Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.954799 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" event={"ID":"cd230e8a-2ec3-40e3-b964-66279c61bdfb","Type":"ContainerDied","Data":"b757aaa3edcd7f6f8f627810d78b3b4955df8395f248cd617074730f9fb0c596"} Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.956946 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l7559" event={"ID":"52fb80ca-3a92-42b7-a9b6-7de2cb478603","Type":"ContainerStarted","Data":"daf129b04169a7a3bb0b9a3fe99c357609b50bcc0bae6995288d4c22ccb82b26"} Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.959483 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" 
event={"ID":"3aafef89-45b3-4517-bc7c-f669580a3c1a","Type":"ContainerStarted","Data":"170128bf7f6d1f84362cf4f03aec7a6341c6021f9b159ebab94e10deae812c86"} Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.993524 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dst6k" podStartSLOduration=127.993501342 podStartE2EDuration="2m7.993501342s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:25.988733062 +0000 UTC m=+149.221241056" watchObservedRunningTime="2026-01-23 06:22:25.993501342 +0000 UTC m=+149.226009346" Jan 23 06:22:25 crc kubenswrapper[4784]: I0123 06:22:25.995171 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:25 crc kubenswrapper[4784]: E0123 06:22:25.996200 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.496167594 +0000 UTC m=+149.728675738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.020985 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6q96w" event={"ID":"409ee6bf-e36a-4a14-9223-32c726962eab","Type":"ContainerStarted","Data":"40948c6ca65b76fedfa1df4ed89b88177dd9dd69e7d4cd2f5a253efe1767f9f4"} Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.048950 4784 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-skjzx container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.049017 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" podUID="f0fc1f34-a4c1-4c8f-a4f1-cdbb984b3c39" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.049600 4784 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sk4wc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.049680 4784 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" podUID="86f3bcf7-f2f4-4ed0-aae2-be5f61657fac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.087661 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-l7559" podStartSLOduration=128.087640945 podStartE2EDuration="2m8.087640945s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:26.086564466 +0000 UTC m=+149.319072440" watchObservedRunningTime="2026-01-23 06:22:26.087640945 +0000 UTC m=+149.320148919" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.138455 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.139480 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.639463706 +0000 UTC m=+149.871971680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.173656 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mtnbf" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.248901 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.250548 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.251020 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.751003252 +0000 UTC m=+149.983511226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.273405 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:26 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:26 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:26 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.273482 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.294035 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfwg8" podStartSLOduration=128.293995143 podStartE2EDuration="2m8.293995143s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:26.248123354 +0000 UTC m=+149.480631328" watchObservedRunningTime="2026-01-23 06:22:26.293995143 +0000 UTC m=+149.526503117" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.353439 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.355501 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.855470587 +0000 UTC m=+150.087978581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.458966 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.459859 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:26.959842919 +0000 UTC m=+150.192350893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.470518 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.470572 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.470919 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.470947 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.564692 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.565011 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:27.064994141 +0000 UTC m=+150.297502125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.668923 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.669448 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:27.169430454 +0000 UTC m=+150.401938428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.680681 4784 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n9rnn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.680816 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.680942 4784 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-n9rnn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.681030 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: 
I0123 06:22:26.787669 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.788399 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:27.288370582 +0000 UTC m=+150.520878556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.799029 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.799696 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.803065 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.803208 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.825830 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.854536 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.854609 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.860249 4784 patch_prober.go:28] interesting pod/console-f9d7485db-2stcb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.860333 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2stcb" podUID="b6c8a935-b603-40f3-8051-c705e23c20f3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.880874 4784 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sk4wc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial 
tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.880919 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" podUID="86f3bcf7-f2f4-4ed0-aae2-be5f61657fac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.880996 4784 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sk4wc container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.881009 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" podUID="86f3bcf7-f2f4-4ed0-aae2-be5f61657fac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.890486 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.890800 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:27.39078215 +0000 UTC m=+150.623290124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.993621 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.994010 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4dc8b2e-1307-4725-b9cf-0b538251378a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4dc8b2e-1307-4725-b9cf-0b538251378a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:26 crc kubenswrapper[4784]: I0123 06:22:26.994179 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4dc8b2e-1307-4725-b9cf-0b538251378a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4dc8b2e-1307-4725-b9cf-0b538251378a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:26 crc kubenswrapper[4784]: E0123 06:22:26.995463 4784 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:27.495435539 +0000 UTC m=+150.727943523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.029246 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f21d3246fb41c063d022d7128f4e5bfbf2f6457d6cce61b7e998aa8483857b88"} Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.031147 4784 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sk4wc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.031203 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" podUID="86f3bcf7-f2f4-4ed0-aae2-be5f61657fac" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.096637 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4dc8b2e-1307-4725-b9cf-0b538251378a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4dc8b2e-1307-4725-b9cf-0b538251378a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.096712 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.096768 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4dc8b2e-1307-4725-b9cf-0b538251378a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4dc8b2e-1307-4725-b9cf-0b538251378a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.096945 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4dc8b2e-1307-4725-b9cf-0b538251378a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4dc8b2e-1307-4725-b9cf-0b538251378a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: E0123 06:22:27.097456 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:27.597430396 +0000 UTC m=+150.829938550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.140823 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4dc8b2e-1307-4725-b9cf-0b538251378a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4dc8b2e-1307-4725-b9cf-0b538251378a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.197674 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:27 crc kubenswrapper[4784]: E0123 06:22:27.197850 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:27.69783533 +0000 UTC m=+150.930343304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.198322 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:27 crc kubenswrapper[4784]: E0123 06:22:27.200584 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:27.700534804 +0000 UTC m=+150.933042778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.259054 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:27 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:27 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:27 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.259144 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.304315 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:27 crc kubenswrapper[4784]: E0123 06:22:27.304698 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 06:22:27.804675309 +0000 UTC m=+151.037183283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.406865 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:27 crc kubenswrapper[4784]: E0123 06:22:27.407645 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:22:27.907628601 +0000 UTC m=+151.140136575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fzmbh" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.419152 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.450496 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xqdqx" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.459294 4784 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.515829 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:27 crc kubenswrapper[4784]: E0123 06:22:27.516199 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:22:28.016183697 +0000 UTC m=+151.248691671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.545865 4784 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T06:22:27.459318888Z","Handler":null,"Name":""} Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.571946 4784 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.572010 4784 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.605636 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.606636 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.634608 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.635276 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.635954 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.650347 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.707492 4784 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.708023 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.737436 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daa71dd0-d394-43db-9bef-e504248bdd60-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"daa71dd0-d394-43db-9bef-e504248bdd60\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.737584 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daa71dd0-d394-43db-9bef-e504248bdd60-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"daa71dd0-d394-43db-9bef-e504248bdd60\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.841930 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daa71dd0-d394-43db-9bef-e504248bdd60-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"daa71dd0-d394-43db-9bef-e504248bdd60\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.842097 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/daa71dd0-d394-43db-9bef-e504248bdd60-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"daa71dd0-d394-43db-9bef-e504248bdd60\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.842616 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daa71dd0-d394-43db-9bef-e504248bdd60-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"daa71dd0-d394-43db-9bef-e504248bdd60\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.918377 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fzmbh\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.978546 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daa71dd0-d394-43db-9bef-e504248bdd60-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"daa71dd0-d394-43db-9bef-e504248bdd60\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:27 crc kubenswrapper[4784]: I0123 06:22:27.979445 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.051156 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.051990 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.102062 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cf123ad80e61927993d616d473af7d05fb315a4b33cabe09c69acb4cd072bfcb"} Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.104215 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6q96w" event={"ID":"409ee6bf-e36a-4a14-9223-32c726962eab","Type":"ContainerStarted","Data":"4a7034cc3889d14551bd31835450a9af2f65d5a4fb2086ede3d985a51b56a4a0"} Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.111553 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0de4571fb45fc755fbbadebf5cfe4313f6295d6c6828f4ae0510992cc3e0a232"} Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.133166 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.133306 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6" event={"ID":"cd230e8a-2ec3-40e3-b964-66279c61bdfb","Type":"ContainerDied","Data":"a6d6ff5bba4cfbf6ea346bd1661751ea2ea232e1d1144fbcecefd5f555106097"} Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.133333 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6d6ff5bba4cfbf6ea346bd1661751ea2ea232e1d1144fbcecefd5f555106097" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.155664 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd230e8a-2ec3-40e3-b964-66279c61bdfb-config-volume\") pod \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.156079 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvp4w\" (UniqueName: \"kubernetes.io/projected/cd230e8a-2ec3-40e3-b964-66279c61bdfb-kube-api-access-dvp4w\") pod \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.156233 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd230e8a-2ec3-40e3-b964-66279c61bdfb-secret-volume\") pod \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\" (UID: \"cd230e8a-2ec3-40e3-b964-66279c61bdfb\") " Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.156993 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd230e8a-2ec3-40e3-b964-66279c61bdfb-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"cd230e8a-2ec3-40e3-b964-66279c61bdfb" (UID: "cd230e8a-2ec3-40e3-b964-66279c61bdfb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.189047 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd230e8a-2ec3-40e3-b964-66279c61bdfb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd230e8a-2ec3-40e3-b964-66279c61bdfb" (UID: "cd230e8a-2ec3-40e3-b964-66279c61bdfb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.225249 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.231259 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-skjzx" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.236013 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd230e8a-2ec3-40e3-b964-66279c61bdfb-kube-api-access-dvp4w" (OuterVolumeSpecName: "kube-api-access-dvp4w") pod "cd230e8a-2ec3-40e3-b964-66279c61bdfb" (UID: "cd230e8a-2ec3-40e3-b964-66279c61bdfb"). InnerVolumeSpecName "kube-api-access-dvp4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.249618 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:28 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:28 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:28 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.249698 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.261978 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd230e8a-2ec3-40e3-b964-66279c61bdfb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.262026 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvp4w\" (UniqueName: \"kubernetes.io/projected/cd230e8a-2ec3-40e3-b964-66279c61bdfb-kube-api-access-dvp4w\") on node \"crc\" DevicePath \"\"" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.262070 4784 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd230e8a-2ec3-40e3-b964-66279c61bdfb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.357436 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") 
pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 06:22:28 crc kubenswrapper[4784]: E0123 06:22:28.751424 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd230e8a_2ec3_40e3_b964_66279c61bdfb.slice\": RecentStats: unable to find data in memory cache]" Jan 23 06:22:28 crc kubenswrapper[4784]: I0123 06:22:28.932079 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.162571 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0580a0c695ea7c54db4165c05ba216f7a283a6c8701f7ffcaad8034e9f7166d9"} Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.177390 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h2bnm"] Jan 23 06:22:29 crc kubenswrapper[4784]: E0123 06:22:29.177678 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd230e8a-2ec3-40e3-b964-66279c61bdfb" containerName="collect-profiles" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.177711 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd230e8a-2ec3-40e3-b964-66279c61bdfb" containerName="collect-profiles" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.177906 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd230e8a-2ec3-40e3-b964-66279c61bdfb" containerName="collect-profiles" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.182865 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.189289 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.241132 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fmdv\" (UniqueName: \"kubernetes.io/projected/4bf2bb81-ee53-475a-9648-987ec2d1adb2-kube-api-access-5fmdv\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.241182 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-utilities\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.241228 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-catalog-content\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.243385 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:29 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:29 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:29 crc 
kubenswrapper[4784]: healthz check failed Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.243429 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.347700 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fmdv\" (UniqueName: \"kubernetes.io/projected/4bf2bb81-ee53-475a-9648-987ec2d1adb2-kube-api-access-5fmdv\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.348089 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-utilities\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.348122 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-catalog-content\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.350005 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-catalog-content\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.350116 
4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-utilities\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.412328 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.413185 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6q96w" event={"ID":"409ee6bf-e36a-4a14-9223-32c726962eab","Type":"ContainerStarted","Data":"f31974d4854ac268ca4d6f7916698ebc829f25283c61ad2049241f26d3945253"} Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.413224 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0e755ab0f98a506a5139dcf8731c34564f98cadbddf3ed35ffd84432671c685c"} Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.413243 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4dc8b2e-1307-4725-b9cf-0b538251378a","Type":"ContainerStarted","Data":"c6fa7f3780e66af374fdaef68f591b02a1ba5718707e17c0639b3dfe0ddf717a"} Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.413256 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4f5ed51e1cc8a03bcfd91b10e9e3d93e51c7344b8c209a9d6e16971816d57c00"} Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.413273 4784 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-5vfdf"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.414641 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.414668 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bv64c"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.415617 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bv64c"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.415641 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vfdf"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.419090 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.421293 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.421370 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h2bnm"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.421593 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.427274 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.440147 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fmdv\" (UniqueName: \"kubernetes.io/projected/4bf2bb81-ee53-475a-9648-987ec2d1adb2-kube-api-access-5fmdv\") pod \"community-operators-h2bnm\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.449197 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2dr\" (UniqueName: \"kubernetes.io/projected/64c4e525-6765-4230-b129-3364819dfa47-kube-api-access-4r2dr\") pod \"certified-operators-5vfdf\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.449306 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-utilities\") pod \"certified-operators-5vfdf\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.449331 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-catalog-content\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.449374 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-utilities\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.449390 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4285s\" (UniqueName: \"kubernetes.io/projected/0771b3f9-d762-4e52-8433-b1802a8c2201-kube-api-access-4285s\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.449410 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-catalog-content\") pod \"certified-operators-5vfdf\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.487208 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-whf7w"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.488661 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551244 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55qz\" (UniqueName: \"kubernetes.io/projected/8cd87290-f925-44f2-b7a6-ec3172726ad6-kube-api-access-s55qz\") pod \"certified-operators-whf7w\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551305 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2dr\" (UniqueName: \"kubernetes.io/projected/64c4e525-6765-4230-b129-3364819dfa47-kube-api-access-4r2dr\") pod \"certified-operators-5vfdf\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551378 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-utilities\") pod \"certified-operators-5vfdf\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551401 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-catalog-content\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551424 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-utilities\") pod \"certified-operators-whf7w\" 
(UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551466 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-catalog-content\") pod \"certified-operators-whf7w\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551492 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-utilities\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551517 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4285s\" (UniqueName: \"kubernetes.io/projected/0771b3f9-d762-4e52-8433-b1802a8c2201-kube-api-access-4285s\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.551541 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-catalog-content\") pod \"certified-operators-5vfdf\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.552410 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-catalog-content\") pod \"certified-operators-5vfdf\" 
(UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.556836 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-utilities\") pod \"certified-operators-5vfdf\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.557179 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-catalog-content\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.557428 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-utilities\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.586389 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.618178 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2dr\" (UniqueName: \"kubernetes.io/projected/64c4e525-6765-4230-b129-3364819dfa47-kube-api-access-4r2dr\") pod \"certified-operators-5vfdf\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.621630 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whf7w"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.644958 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4285s\" (UniqueName: \"kubernetes.io/projected/0771b3f9-d762-4e52-8433-b1802a8c2201-kube-api-access-4285s\") pod \"community-operators-bv64c\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.658812 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-utilities\") pod \"certified-operators-whf7w\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.660552 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-catalog-content\") pod \"certified-operators-whf7w\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.660626 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s55qz\" (UniqueName: \"kubernetes.io/projected/8cd87290-f925-44f2-b7a6-ec3172726ad6-kube-api-access-s55qz\") pod \"certified-operators-whf7w\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.659771 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-utilities\") pod \"certified-operators-whf7w\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.661450 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6q96w" podStartSLOduration=16.66142917 podStartE2EDuration="16.66142917s" podCreationTimestamp="2026-01-23 06:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:29.660227217 +0000 UTC m=+152.892735191" watchObservedRunningTime="2026-01-23 06:22:29.66142917 +0000 UTC m=+152.893937144" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.661502 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-catalog-content\") pod \"certified-operators-whf7w\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.695866 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55qz\" (UniqueName: \"kubernetes.io/projected/8cd87290-f925-44f2-b7a6-ec3172726ad6-kube-api-access-s55qz\") pod \"certified-operators-whf7w\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " 
pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.779337 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzmbh"] Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.810092 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:22:29 crc kubenswrapper[4784]: W0123 06:22:29.814540 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod039c07e3_0dbc_4dd7_9984_5125cc13c6ff.slice/crio-76bcc8558b78e14ddddc986f42eb4c9e6c7578448a704e36981c29da2e1bd14e WatchSource:0}: Error finding container 76bcc8558b78e14ddddc986f42eb4c9e6c7578448a704e36981c29da2e1bd14e: Status 404 returned error can't find the container with id 76bcc8558b78e14ddddc986f42eb4c9e6c7578448a704e36981c29da2e1bd14e Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.887168 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:22:29 crc kubenswrapper[4784]: I0123 06:22:29.978667 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.142389 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.142931 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.148254 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.248228 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:30 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:30 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:30 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.248361 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.501275 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.508441 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hhgpf" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.508976 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"daa71dd0-d394-43db-9bef-e504248bdd60","Type":"ContainerStarted","Data":"9112305713daace2c743c0b70279f99e29f1e9bd528956f0b6a81fd7f3082ba8"} Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.510004 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" event={"ID":"039c07e3-0dbc-4dd7-9984-5125cc13c6ff","Type":"ContainerStarted","Data":"76bcc8558b78e14ddddc986f42eb4c9e6c7578448a704e36981c29da2e1bd14e"} Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.512401 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4dc8b2e-1307-4725-b9cf-0b538251378a","Type":"ContainerStarted","Data":"54d1d9b44e1a92ace413435e6613f1a1b09b0069ad341c301ca393dc4fb667a4"} Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.546052 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qgmbq" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.649687 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.649666695 podStartE2EDuration="4.649666695s" podCreationTimestamp="2026-01-23 06:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:30.616215614 +0000 UTC m=+153.848723578" watchObservedRunningTime="2026-01-23 06:22:30.649666695 +0000 UTC m=+153.882174669" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.810573 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2rk5j"] Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.818573 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.826547 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.846031 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rk5j"] Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.928714 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-utilities\") pod \"redhat-marketplace-2rk5j\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.928815 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phs62\" (UniqueName: \"kubernetes.io/projected/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-kube-api-access-phs62\") pod \"redhat-marketplace-2rk5j\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:30 crc kubenswrapper[4784]: I0123 06:22:30.928878 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-catalog-content\") pod \"redhat-marketplace-2rk5j\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.030419 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-utilities\") pod \"redhat-marketplace-2rk5j\" (UID: 
\"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.030511 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phs62\" (UniqueName: \"kubernetes.io/projected/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-kube-api-access-phs62\") pod \"redhat-marketplace-2rk5j\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.030580 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-catalog-content\") pod \"redhat-marketplace-2rk5j\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.031208 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-catalog-content\") pod \"redhat-marketplace-2rk5j\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.031199 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-utilities\") pod \"redhat-marketplace-2rk5j\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.067398 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phs62\" (UniqueName: \"kubernetes.io/projected/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-kube-api-access-phs62\") pod \"redhat-marketplace-2rk5j\" (UID: 
\"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.122681 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h2bnm"] Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.147632 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.219014 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gjhhq"] Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.240889 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjhhq"] Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.241083 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.249025 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:31 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:31 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:31 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.249091 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.340309 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6lngv\" (UniqueName: \"kubernetes.io/projected/0a480350-f75f-4866-bca5-3c8a6793ad46-kube-api-access-6lngv\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.340395 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-utilities\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.340484 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-catalog-content\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.442517 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lngv\" (UniqueName: \"kubernetes.io/projected/0a480350-f75f-4866-bca5-3c8a6793ad46-kube-api-access-6lngv\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.442602 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-utilities\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.442656 4784 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-catalog-content\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.443427 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-catalog-content\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.444540 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-utilities\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.474136 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lngv\" (UniqueName: \"kubernetes.io/projected/0a480350-f75f-4866-bca5-3c8a6793ad46-kube-api-access-6lngv\") pod \"redhat-marketplace-gjhhq\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.492892 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whf7w"] Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.527065 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2bnm" event={"ID":"4bf2bb81-ee53-475a-9648-987ec2d1adb2","Type":"ContainerStarted","Data":"05110e7b361821c9f1d53ff6618a2b7556732967047c2a0f56d22b24e1cff8ea"} Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.538265 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" event={"ID":"039c07e3-0dbc-4dd7-9984-5125cc13c6ff","Type":"ContainerStarted","Data":"ea986b64007d6d2b394a5d1031f41b9b047e3ac574cd7c3920d61143e6433902"} Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.538836 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.548213 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vfdf"] Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.551539 4784 generic.go:334] "Generic (PLEG): container finished" podID="f4dc8b2e-1307-4725-b9cf-0b538251378a" containerID="54d1d9b44e1a92ace413435e6613f1a1b09b0069ad341c301ca393dc4fb667a4" exitCode=0 Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.551816 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4dc8b2e-1307-4725-b9cf-0b538251378a","Type":"ContainerDied","Data":"54d1d9b44e1a92ace413435e6613f1a1b09b0069ad341c301ca393dc4fb667a4"} Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.573779 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bv64c"] Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.587949 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"daa71dd0-d394-43db-9bef-e504248bdd60","Type":"ContainerStarted","Data":"bc1fd85ba02c6679aec9171d45117aa90abd30477be17c57857a16246cfae286"} Jan 23 06:22:31 crc kubenswrapper[4784]: W0123 06:22:31.602319 4784 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64c4e525_6765_4230_b129_3364819dfa47.slice/crio-06fee6ff948d4d94b95a8623efc916450237bc89c5e006adb208bde703a7c86e WatchSource:0}: Error finding container 06fee6ff948d4d94b95a8623efc916450237bc89c5e006adb208bde703a7c86e: Status 404 returned error can't find the container with id 06fee6ff948d4d94b95a8623efc916450237bc89c5e006adb208bde703a7c86e Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.673831 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" podStartSLOduration=133.673800326 podStartE2EDuration="2m13.673800326s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:31.656427533 +0000 UTC m=+154.888935527" watchObservedRunningTime="2026-01-23 06:22:31.673800326 +0000 UTC m=+154.906308300" Jan 23 06:22:31 crc kubenswrapper[4784]: I0123 06:22:31.705873 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.031142 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.031121665 podStartE2EDuration="5.031121665s" podCreationTimestamp="2026-01-23 06:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:22:31.803472847 +0000 UTC m=+155.035980821" watchObservedRunningTime="2026-01-23 06:22:32.031121665 +0000 UTC m=+155.263629639" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.032252 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6vbpl"] Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.033290 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.035569 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.047900 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vbpl"] Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.157023 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rk5j"] Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.190790 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-utilities\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 
06:22:32.190854 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-catalog-content\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.190906 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwt2b\" (UniqueName: \"kubernetes.io/projected/f2daccf7-5481-4092-a720-045f3e033b62-kube-api-access-vwt2b\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.251106 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:32 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:32 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:32 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.251231 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.292941 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-utilities\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 
23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.293417 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-catalog-content\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.293473 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwt2b\" (UniqueName: \"kubernetes.io/projected/f2daccf7-5481-4092-a720-045f3e033b62-kube-api-access-vwt2b\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.296372 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-catalog-content\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.296687 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-utilities\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.354115 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwt2b\" (UniqueName: \"kubernetes.io/projected/f2daccf7-5481-4092-a720-045f3e033b62-kube-api-access-vwt2b\") pod \"redhat-operators-6vbpl\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 
06:22:32.415350 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zkrz6"] Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.424233 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.430934 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkrz6"] Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.453444 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.460676 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjhhq"] Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.496635 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-utilities\") pod \"redhat-operators-zkrz6\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.496791 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-catalog-content\") pod \"redhat-operators-zkrz6\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.496817 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr877\" (UniqueName: \"kubernetes.io/projected/de559431-551a-4057-96ec-37537d6eddc8-kube-api-access-nr877\") pod \"redhat-operators-zkrz6\" (UID: 
\"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.595004 4784 generic.go:334] "Generic (PLEG): container finished" podID="64c4e525-6765-4230-b129-3364819dfa47" containerID="aefb9d37a2f550d5074d6dbe0749570b6a1e6d31edcabfc29687498f4f61ce3d" exitCode=0 Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.595101 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vfdf" event={"ID":"64c4e525-6765-4230-b129-3364819dfa47","Type":"ContainerDied","Data":"aefb9d37a2f550d5074d6dbe0749570b6a1e6d31edcabfc29687498f4f61ce3d"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.595143 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vfdf" event={"ID":"64c4e525-6765-4230-b129-3364819dfa47","Type":"ContainerStarted","Data":"06fee6ff948d4d94b95a8623efc916450237bc89c5e006adb208bde703a7c86e"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.598382 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-catalog-content\") pod \"redhat-operators-zkrz6\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.598410 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr877\" (UniqueName: \"kubernetes.io/projected/de559431-551a-4057-96ec-37537d6eddc8-kube-api-access-nr877\") pod \"redhat-operators-zkrz6\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.598503 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-utilities\") pod \"redhat-operators-zkrz6\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.599118 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-utilities\") pod \"redhat-operators-zkrz6\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.599353 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-catalog-content\") pod \"redhat-operators-zkrz6\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.600695 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.603027 4784 generic.go:334] "Generic (PLEG): container finished" podID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerID="1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091" exitCode=0 Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.603098 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whf7w" event={"ID":"8cd87290-f925-44f2-b7a6-ec3172726ad6","Type":"ContainerDied","Data":"1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.603172 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whf7w" 
event={"ID":"8cd87290-f925-44f2-b7a6-ec3172726ad6","Type":"ContainerStarted","Data":"ca5523cb11dac990f9773164ea5e2eaad7a52e9c5ff4f17d4590a06c20752eb0"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.621550 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr877\" (UniqueName: \"kubernetes.io/projected/de559431-551a-4057-96ec-37537d6eddc8-kube-api-access-nr877\") pod \"redhat-operators-zkrz6\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.621792 4784 generic.go:334] "Generic (PLEG): container finished" podID="daa71dd0-d394-43db-9bef-e504248bdd60" containerID="bc1fd85ba02c6679aec9171d45117aa90abd30477be17c57857a16246cfae286" exitCode=0 Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.622009 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"daa71dd0-d394-43db-9bef-e504248bdd60","Type":"ContainerDied","Data":"bc1fd85ba02c6679aec9171d45117aa90abd30477be17c57857a16246cfae286"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.633221 4784 generic.go:334] "Generic (PLEG): container finished" podID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerID="bdfd7c8aef631e24f1ac9d2b5919f98e5ff37429e03b3ce83b6c86397e56de63" exitCode=0 Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.634793 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2bnm" event={"ID":"4bf2bb81-ee53-475a-9648-987ec2d1adb2","Type":"ContainerDied","Data":"bdfd7c8aef631e24f1ac9d2b5919f98e5ff37429e03b3ce83b6c86397e56de63"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.636971 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjhhq" 
event={"ID":"0a480350-f75f-4866-bca5-3c8a6793ad46","Type":"ContainerStarted","Data":"84ac10c5c488300bf63eebc0234aeebc98691e2b535f675454507f0035c2ce09"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.651321 4784 generic.go:334] "Generic (PLEG): container finished" podID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerID="690059e8cb4156cf8a188e4f08c27ec969aa14013639c0c8910906b7691bf208" exitCode=0 Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.651481 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv64c" event={"ID":"0771b3f9-d762-4e52-8433-b1802a8c2201","Type":"ContainerDied","Data":"690059e8cb4156cf8a188e4f08c27ec969aa14013639c0c8910906b7691bf208"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.651533 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv64c" event={"ID":"0771b3f9-d762-4e52-8433-b1802a8c2201","Type":"ContainerStarted","Data":"9720827a26222161782cdfd4bd2feccd7461c179784ff0aca8c139963121a4f1"} Jan 23 06:22:32 crc kubenswrapper[4784]: I0123 06:22:32.677120 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rk5j" event={"ID":"9a96ea8e-c45f-4799-886a-aef90c8b8e1a","Type":"ContainerStarted","Data":"7c311f689a9fb56a213d3ee08d7fa607c57af2e3d31aa819ffd7c8650316f48f"} Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:32.997657 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.129563 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vbpl"] Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.309182 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:33 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:33 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:33 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.309261 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.517121 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.651860 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4dc8b2e-1307-4725-b9cf-0b538251378a-kubelet-dir\") pod \"f4dc8b2e-1307-4725-b9cf-0b538251378a\" (UID: \"f4dc8b2e-1307-4725-b9cf-0b538251378a\") " Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.652044 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4dc8b2e-1307-4725-b9cf-0b538251378a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f4dc8b2e-1307-4725-b9cf-0b538251378a" (UID: "f4dc8b2e-1307-4725-b9cf-0b538251378a"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.653322 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4dc8b2e-1307-4725-b9cf-0b538251378a-kube-api-access\") pod \"f4dc8b2e-1307-4725-b9cf-0b538251378a\" (UID: \"f4dc8b2e-1307-4725-b9cf-0b538251378a\") " Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.655352 4784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4dc8b2e-1307-4725-b9cf-0b538251378a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.684099 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4dc8b2e-1307-4725-b9cf-0b538251378a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f4dc8b2e-1307-4725-b9cf-0b538251378a" (UID: "f4dc8b2e-1307-4725-b9cf-0b538251378a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.703891 4784 generic.go:334] "Generic (PLEG): container finished" podID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerID="6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0" exitCode=0 Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.704011 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjhhq" event={"ID":"0a480350-f75f-4866-bca5-3c8a6793ad46","Type":"ContainerDied","Data":"6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0"} Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.744997 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vbpl" event={"ID":"f2daccf7-5481-4092-a720-045f3e033b62","Type":"ContainerStarted","Data":"264ec505d70a5404b755dbd0324055d4e5027c61ba1caad48a4866e0ec3b98a1"} Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.748030 4784 generic.go:334] "Generic (PLEG): container finished" podID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerID="5cb2946608e7f3fdb1521687baa4868eb59b42189671e22b1afb75e1b26e750e" exitCode=0 Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.748131 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rk5j" event={"ID":"9a96ea8e-c45f-4799-886a-aef90c8b8e1a","Type":"ContainerDied","Data":"5cb2946608e7f3fdb1521687baa4868eb59b42189671e22b1afb75e1b26e750e"} Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.756376 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4dc8b2e-1307-4725-b9cf-0b538251378a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.760074 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.771001 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4dc8b2e-1307-4725-b9cf-0b538251378a","Type":"ContainerDied","Data":"c6fa7f3780e66af374fdaef68f591b02a1ba5718707e17c0639b3dfe0ddf717a"} Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.771088 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6fa7f3780e66af374fdaef68f591b02a1ba5718707e17c0639b3dfe0ddf717a" Jan 23 06:22:33 crc kubenswrapper[4784]: I0123 06:22:33.811134 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkrz6"] Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.275037 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:34 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:34 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:34 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.275140 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.514286 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.534509 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daa71dd0-d394-43db-9bef-e504248bdd60-kube-api-access\") pod \"daa71dd0-d394-43db-9bef-e504248bdd60\" (UID: \"daa71dd0-d394-43db-9bef-e504248bdd60\") " Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.547154 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa71dd0-d394-43db-9bef-e504248bdd60-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "daa71dd0-d394-43db-9bef-e504248bdd60" (UID: "daa71dd0-d394-43db-9bef-e504248bdd60"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.640837 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daa71dd0-d394-43db-9bef-e504248bdd60-kubelet-dir\") pod \"daa71dd0-d394-43db-9bef-e504248bdd60\" (UID: \"daa71dd0-d394-43db-9bef-e504248bdd60\") " Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.641649 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/daa71dd0-d394-43db-9bef-e504248bdd60-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.641721 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/daa71dd0-d394-43db-9bef-e504248bdd60-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "daa71dd0-d394-43db-9bef-e504248bdd60" (UID: "daa71dd0-d394-43db-9bef-e504248bdd60"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.729142 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5st7s" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.742853 4784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/daa71dd0-d394-43db-9bef-e504248bdd60-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.777289 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"daa71dd0-d394-43db-9bef-e504248bdd60","Type":"ContainerDied","Data":"9112305713daace2c743c0b70279f99e29f1e9bd528956f0b6a81fd7f3082ba8"} Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.777339 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9112305713daace2c743c0b70279f99e29f1e9bd528956f0b6a81fd7f3082ba8" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.777473 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.804354 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkrz6" event={"ID":"de559431-551a-4057-96ec-37537d6eddc8","Type":"ContainerStarted","Data":"323eeb41926d8e8041f441e15c3b775323ccbcbbb6e7e1b4e14149af617d6e5f"} Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.809216 4784 generic.go:334] "Generic (PLEG): container finished" podID="f2daccf7-5481-4092-a720-045f3e033b62" containerID="5881eafaff6a9ec6c0a067a2845c8894841e1044e709d1d2dcef2d9ec73a26ee" exitCode=0 Jan 23 06:22:34 crc kubenswrapper[4784]: I0123 06:22:34.809281 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vbpl" event={"ID":"f2daccf7-5481-4092-a720-045f3e033b62","Type":"ContainerDied","Data":"5881eafaff6a9ec6c0a067a2845c8894841e1044e709d1d2dcef2d9ec73a26ee"} Jan 23 06:22:35 crc kubenswrapper[4784]: I0123 06:22:35.254408 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:35 crc kubenswrapper[4784]: [-]has-synced failed: reason withheld Jan 23 06:22:35 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:35 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:35 crc kubenswrapper[4784]: I0123 06:22:35.254961 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.014905 4784 generic.go:334] "Generic (PLEG): container finished" podID="de559431-551a-4057-96ec-37537d6eddc8" 
containerID="5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa" exitCode=0 Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.015012 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkrz6" event={"ID":"de559431-551a-4057-96ec-37537d6eddc8","Type":"ContainerDied","Data":"5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa"} Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.264914 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:22:36 crc kubenswrapper[4784]: [+]has-synced ok Jan 23 06:22:36 crc kubenswrapper[4784]: [+]process-running ok Jan 23 06:22:36 crc kubenswrapper[4784]: healthz check failed Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.265002 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.466402 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.466470 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.466479 4784 
patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.466562 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.749064 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.855275 4784 patch_prober.go:28] interesting pod/console-f9d7485db-2stcb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.855352 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2stcb" podUID="b6c8a935-b603-40f3-8051-c705e23c20f3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 23 06:22:36 crc kubenswrapper[4784]: I0123 06:22:36.901824 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sk4wc" Jan 23 06:22:37 crc kubenswrapper[4784]: I0123 06:22:37.289998 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:37 crc kubenswrapper[4784]: I0123 06:22:37.396001 4784 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-gvzxz" Jan 23 06:22:40 crc kubenswrapper[4784]: I0123 06:22:40.932793 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:40 crc kubenswrapper[4784]: I0123 06:22:40.943234 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cdf947ef-7279-4d43-854c-d836e0043e5b-metrics-certs\") pod \"network-metrics-daemon-lcdgv\" (UID: \"cdf947ef-7279-4d43-854c-d836e0043e5b\") " pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:41 crc kubenswrapper[4784]: I0123 06:22:41.183525 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lcdgv" Jan 23 06:22:42 crc kubenswrapper[4784]: I0123 06:22:42.133593 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lcdgv"] Jan 23 06:22:42 crc kubenswrapper[4784]: W0123 06:22:42.153900 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdf947ef_7279_4d43_854c_d836e0043e5b.slice/crio-9ffd1975c6f5ac99ed330bb2d6ae52c28e462ee639e4f867d7d556336204590c WatchSource:0}: Error finding container 9ffd1975c6f5ac99ed330bb2d6ae52c28e462ee639e4f867d7d556336204590c: Status 404 returned error can't find the container with id 9ffd1975c6f5ac99ed330bb2d6ae52c28e462ee639e4f867d7d556336204590c Jan 23 06:22:42 crc kubenswrapper[4784]: I0123 06:22:42.535542 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" event={"ID":"cdf947ef-7279-4d43-854c-d836e0043e5b","Type":"ContainerStarted","Data":"9ffd1975c6f5ac99ed330bb2d6ae52c28e462ee639e4f867d7d556336204590c"} Jan 23 06:22:44 crc kubenswrapper[4784]: I0123 06:22:44.568113 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" event={"ID":"cdf947ef-7279-4d43-854c-d836e0043e5b","Type":"ContainerStarted","Data":"9e94949a36afbdc21221f7211437786c4869ac8dacfa541d8f86d8e18356c798"} Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.473318 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.473720 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.473802 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.473371 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.473993 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.474550 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8f44e8dbc7b5fd572f4179dab97ef9047e25478a13ef285f736eb4330cedf7cc"} pod="openshift-console/downloads-7954f5f757-bb5s2" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.474659 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" containerID="cri-o://8f44e8dbc7b5fd572f4179dab97ef9047e25478a13ef285f736eb4330cedf7cc" gracePeriod=2 Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.474913 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.474974 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.860414 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:46 crc kubenswrapper[4784]: I0123 06:22:46.866630 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:22:47 crc kubenswrapper[4784]: I0123 06:22:47.639435 4784 generic.go:334] "Generic (PLEG): container finished" podID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerID="8f44e8dbc7b5fd572f4179dab97ef9047e25478a13ef285f736eb4330cedf7cc" exitCode=0 Jan 23 06:22:47 crc kubenswrapper[4784]: I0123 06:22:47.639803 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bb5s2" event={"ID":"b03d7aa3-b8a0-4725-b16d-908e50b963e4","Type":"ContainerDied","Data":"8f44e8dbc7b5fd572f4179dab97ef9047e25478a13ef285f736eb4330cedf7cc"} Jan 23 06:22:48 crc kubenswrapper[4784]: I0123 06:22:48.238489 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:22:53 crc kubenswrapper[4784]: I0123 06:22:53.603467 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 23 06:22:53 crc kubenswrapper[4784]: I0123 06:22:53.604261 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:22:56 crc kubenswrapper[4784]: I0123 06:22:56.469772 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:22:56 crc kubenswrapper[4784]: I0123 06:22:56.470587 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:22:56 crc kubenswrapper[4784]: I0123 06:22:56.665396 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-x5265" Jan 23 06:23:05 crc kubenswrapper[4784]: I0123 06:23:05.623009 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.466499 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.467148 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.525917 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 06:23:06 crc kubenswrapper[4784]: E0123 06:23:06.526177 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daa71dd0-d394-43db-9bef-e504248bdd60" containerName="pruner" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.526190 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa71dd0-d394-43db-9bef-e504248bdd60" containerName="pruner" Jan 23 06:23:06 crc kubenswrapper[4784]: E0123 06:23:06.526204 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dc8b2e-1307-4725-b9cf-0b538251378a" containerName="pruner" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.526210 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dc8b2e-1307-4725-b9cf-0b538251378a" containerName="pruner" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.526303 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dc8b2e-1307-4725-b9cf-0b538251378a" containerName="pruner" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.526316 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa71dd0-d394-43db-9bef-e504248bdd60" containerName="pruner" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.526665 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.530316 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.530344 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.582233 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.635666 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17c4a-d0fc-4232-8194-5b2898e72307-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"56d17c4a-d0fc-4232-8194-5b2898e72307\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.636623 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17c4a-d0fc-4232-8194-5b2898e72307-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"56d17c4a-d0fc-4232-8194-5b2898e72307\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.737505 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17c4a-d0fc-4232-8194-5b2898e72307-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"56d17c4a-d0fc-4232-8194-5b2898e72307\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.738481 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/56d17c4a-d0fc-4232-8194-5b2898e72307-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"56d17c4a-d0fc-4232-8194-5b2898e72307\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.739249 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17c4a-d0fc-4232-8194-5b2898e72307-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"56d17c4a-d0fc-4232-8194-5b2898e72307\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.761364 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17c4a-d0fc-4232-8194-5b2898e72307-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"56d17c4a-d0fc-4232-8194-5b2898e72307\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:06 crc kubenswrapper[4784]: I0123 06:23:06.919428 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.734264 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.735875 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.739776 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.798113 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kube-api-access\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.798302 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-var-lock\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.798349 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.899569 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kube-api-access\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.899666 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-var-lock\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.899697 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.899819 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.900028 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-var-lock\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:10 crc kubenswrapper[4784]: I0123 06:23:10.926652 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kube-api-access\") pod \"installer-9-crc\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:11 crc kubenswrapper[4784]: I0123 06:23:11.068296 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.430036 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.430287 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nr877,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResiz
ePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zkrz6_openshift-marketplace(de559431-551a-4057-96ec-37537d6eddc8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.431781 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zkrz6" podUID="de559431-551a-4057-96ec-37537d6eddc8" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.499434 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.499917 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fmdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h2bnm_openshift-marketplace(4bf2bb81-ee53-475a-9648-987ec2d1adb2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.501366 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h2bnm" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" Jan 23 06:23:15 crc 
kubenswrapper[4784]: E0123 06:23:15.542222 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.542591 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phs62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-2rk5j_openshift-marketplace(9a96ea8e-c45f-4799-886a-aef90c8b8e1a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.543737 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2rk5j" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.551950 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.552133 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s55qz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-whf7w_openshift-marketplace(8cd87290-f925-44f2-b7a6-ec3172726ad6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.553378 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-whf7w" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" Jan 23 06:23:15 crc 
kubenswrapper[4784]: E0123 06:23:15.580401 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.580995 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwt2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-6vbpl_openshift-marketplace(f2daccf7-5481-4092-a720-045f3e033b62): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.582183 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6vbpl" podUID="f2daccf7-5481-4092-a720-045f3e033b62" Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.892218 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjhhq" event={"ID":"0a480350-f75f-4866-bca5-3c8a6793ad46","Type":"ContainerStarted","Data":"cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c"} Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.896671 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lcdgv" event={"ID":"cdf947ef-7279-4d43-854c-d836e0043e5b","Type":"ContainerStarted","Data":"e67c22bf57642bcc5297b51d06de77529141ab0debc0a22112a352c304233095"} Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.903028 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.907293 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv64c" event={"ID":"0771b3f9-d762-4e52-8433-b1802a8c2201","Type":"ContainerStarted","Data":"4bc0ce3726e219bce9dec2bc0f0df3c7ddf65f707abdf04253f3010150628351"} Jan 23 06:23:15 crc kubenswrapper[4784]: W0123 06:23:15.914070 4784 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod3cba8aab_f67d_4ec7_99c1_6294655ebe56.slice/crio-eb94c2894653c8370cb0d4fe1e35ef9350207e4a0d3efc783ae0a333853d1964 WatchSource:0}: Error finding container eb94c2894653c8370cb0d4fe1e35ef9350207e4a0d3efc783ae0a333853d1964: Status 404 returned error can't find the container with id eb94c2894653c8370cb0d4fe1e35ef9350207e4a0d3efc783ae0a333853d1964 Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.915156 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vfdf" event={"ID":"64c4e525-6765-4230-b129-3364819dfa47","Type":"ContainerStarted","Data":"ddbef208c73165a7a82c70565f5c1ddeeb992faf867726b5ab31158d1fcdaa2f"} Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.921634 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bb5s2" event={"ID":"b03d7aa3-b8a0-4725-b16d-908e50b963e4","Type":"ContainerStarted","Data":"1a8160dba0398ac493e396623d7a32903ca45657822ea0c3cdd8b01a7ca6fffc"} Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.922004 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.922274 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:23:15 crc kubenswrapper[4784]: I0123 06:23:15.922314 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.928066 4784 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zkrz6" podUID="de559431-551a-4057-96ec-37537d6eddc8" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.933348 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-whf7w" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.933440 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h2bnm" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.933513 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2rk5j" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" Jan 23 06:23:15 crc kubenswrapper[4784]: E0123 06:23:15.933615 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6vbpl" podUID="f2daccf7-5481-4092-a720-045f3e033b62" Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.021696 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.039011 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lcdgv" podStartSLOduration=178.038978794 podStartE2EDuration="2m58.038978794s" podCreationTimestamp="2026-01-23 06:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:23:16.035555881 +0000 UTC m=+199.268063865" watchObservedRunningTime="2026-01-23 06:23:16.038978794 +0000 UTC m=+199.271486768" Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.466844 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.467863 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.467049 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.468291 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 
10.217.0.26:8080: connect: connection refused" Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.928927 4784 generic.go:334] "Generic (PLEG): container finished" podID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerID="cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c" exitCode=0 Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.929057 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjhhq" event={"ID":"0a480350-f75f-4866-bca5-3c8a6793ad46","Type":"ContainerDied","Data":"cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c"} Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.930284 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3cba8aab-f67d-4ec7-99c1-6294655ebe56","Type":"ContainerStarted","Data":"eb94c2894653c8370cb0d4fe1e35ef9350207e4a0d3efc783ae0a333853d1964"} Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.932394 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"56d17c4a-d0fc-4232-8194-5b2898e72307","Type":"ContainerStarted","Data":"446fdc41c58df2dc21a08c012050a6893df7e7f3ef1e5af9e00c38b326c0fbc9"} Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.935836 4784 generic.go:334] "Generic (PLEG): container finished" podID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerID="4bc0ce3726e219bce9dec2bc0f0df3c7ddf65f707abdf04253f3010150628351" exitCode=0 Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.935896 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv64c" event={"ID":"0771b3f9-d762-4e52-8433-b1802a8c2201","Type":"ContainerDied","Data":"4bc0ce3726e219bce9dec2bc0f0df3c7ddf65f707abdf04253f3010150628351"} Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.941124 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="64c4e525-6765-4230-b129-3364819dfa47" containerID="ddbef208c73165a7a82c70565f5c1ddeeb992faf867726b5ab31158d1fcdaa2f" exitCode=0 Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.941680 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vfdf" event={"ID":"64c4e525-6765-4230-b129-3364819dfa47","Type":"ContainerDied","Data":"ddbef208c73165a7a82c70565f5c1ddeeb992faf867726b5ab31158d1fcdaa2f"} Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.942299 4784 patch_prober.go:28] interesting pod/downloads-7954f5f757-bb5s2 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 23 06:23:16 crc kubenswrapper[4784]: I0123 06:23:16.942340 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bb5s2" podUID="b03d7aa3-b8a0-4725-b16d-908e50b963e4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 23 06:23:17 crc kubenswrapper[4784]: I0123 06:23:17.948149 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"56d17c4a-d0fc-4232-8194-5b2898e72307","Type":"ContainerStarted","Data":"45b5a95f9c4c26c3e62ec8dc315b8591dcd43a7becff81e6f28c3dbab8449b4f"} Jan 23 06:23:17 crc kubenswrapper[4784]: I0123 06:23:17.950065 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3cba8aab-f67d-4ec7-99c1-6294655ebe56","Type":"ContainerStarted","Data":"97c6a5d983a55bb26c67b194bbd95a34daf6ee7c63be985912f5d7bac214dce0"} Jan 23 06:23:17 crc kubenswrapper[4784]: I0123 06:23:17.966190 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
podStartSLOduration=11.966169151999999 podStartE2EDuration="11.966169152s" podCreationTimestamp="2026-01-23 06:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:23:17.963460588 +0000 UTC m=+201.195968562" watchObservedRunningTime="2026-01-23 06:23:17.966169152 +0000 UTC m=+201.198677126" Jan 23 06:23:17 crc kubenswrapper[4784]: I0123 06:23:17.984289 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.984261373 podStartE2EDuration="7.984261373s" podCreationTimestamp="2026-01-23 06:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:23:17.978370083 +0000 UTC m=+201.210878057" watchObservedRunningTime="2026-01-23 06:23:17.984261373 +0000 UTC m=+201.216769347" Jan 23 06:23:18 crc kubenswrapper[4784]: I0123 06:23:18.958943 4784 generic.go:334] "Generic (PLEG): container finished" podID="56d17c4a-d0fc-4232-8194-5b2898e72307" containerID="45b5a95f9c4c26c3e62ec8dc315b8591dcd43a7becff81e6f28c3dbab8449b4f" exitCode=0 Jan 23 06:23:18 crc kubenswrapper[4784]: I0123 06:23:18.959037 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"56d17c4a-d0fc-4232-8194-5b2898e72307","Type":"ContainerDied","Data":"45b5a95f9c4c26c3e62ec8dc315b8591dcd43a7becff81e6f28c3dbab8449b4f"} Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.225325 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.363379 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17c4a-d0fc-4232-8194-5b2898e72307-kube-api-access\") pod \"56d17c4a-d0fc-4232-8194-5b2898e72307\" (UID: \"56d17c4a-d0fc-4232-8194-5b2898e72307\") " Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.363638 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17c4a-d0fc-4232-8194-5b2898e72307-kubelet-dir\") pod \"56d17c4a-d0fc-4232-8194-5b2898e72307\" (UID: \"56d17c4a-d0fc-4232-8194-5b2898e72307\") " Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.363817 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d17c4a-d0fc-4232-8194-5b2898e72307-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56d17c4a-d0fc-4232-8194-5b2898e72307" (UID: "56d17c4a-d0fc-4232-8194-5b2898e72307"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.373018 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d17c4a-d0fc-4232-8194-5b2898e72307-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56d17c4a-d0fc-4232-8194-5b2898e72307" (UID: "56d17c4a-d0fc-4232-8194-5b2898e72307"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.465110 4784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56d17c4a-d0fc-4232-8194-5b2898e72307-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.465170 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56d17c4a-d0fc-4232-8194-5b2898e72307-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.975666 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"56d17c4a-d0fc-4232-8194-5b2898e72307","Type":"ContainerDied","Data":"446fdc41c58df2dc21a08c012050a6893df7e7f3ef1e5af9e00c38b326c0fbc9"} Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.976059 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="446fdc41c58df2dc21a08c012050a6893df7e7f3ef1e5af9e00c38b326c0fbc9" Jan 23 06:23:20 crc kubenswrapper[4784]: I0123 06:23:20.975976 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:23:21 crc kubenswrapper[4784]: I0123 06:23:21.984717 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjhhq" event={"ID":"0a480350-f75f-4866-bca5-3c8a6793ad46","Type":"ContainerStarted","Data":"ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851"} Jan 23 06:23:22 crc kubenswrapper[4784]: I0123 06:23:22.014046 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gjhhq" podStartSLOduration=3.338617577 podStartE2EDuration="51.01401504s" podCreationTimestamp="2026-01-23 06:22:31 +0000 UTC" firstStartedPulling="2026-01-23 06:22:33.711549914 +0000 UTC m=+156.944057898" lastFinishedPulling="2026-01-23 06:23:21.386947367 +0000 UTC m=+204.619455361" observedRunningTime="2026-01-23 06:23:22.012612412 +0000 UTC m=+205.245120386" watchObservedRunningTime="2026-01-23 06:23:22.01401504 +0000 UTC m=+205.246523014" Jan 23 06:23:23 crc kubenswrapper[4784]: I0123 06:23:23.603488 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:23:23 crc kubenswrapper[4784]: I0123 06:23:23.603572 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:23:23 crc kubenswrapper[4784]: I0123 06:23:23.603645 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 
06:23:23 crc kubenswrapper[4784]: I0123 06:23:23.604383 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:23:23 crc kubenswrapper[4784]: I0123 06:23:23.604447 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b" gracePeriod=600 Jan 23 06:23:24 crc kubenswrapper[4784]: I0123 06:23:24.000262 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv64c" event={"ID":"0771b3f9-d762-4e52-8433-b1802a8c2201","Type":"ContainerStarted","Data":"13fc31123d485582b5509138662ff70700a55025ced30884baebc7e0c5cf7d34"} Jan 23 06:23:25 crc kubenswrapper[4784]: I0123 06:23:25.007654 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b" exitCode=0 Jan 23 06:23:25 crc kubenswrapper[4784]: I0123 06:23:25.009097 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b"} Jan 23 06:23:25 crc kubenswrapper[4784]: I0123 06:23:25.027006 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bv64c" podStartSLOduration=6.123998908 podStartE2EDuration="56.026987244s" 
podCreationTimestamp="2026-01-23 06:22:29 +0000 UTC" firstStartedPulling="2026-01-23 06:22:32.676891066 +0000 UTC m=+155.909399040" lastFinishedPulling="2026-01-23 06:23:22.579879392 +0000 UTC m=+205.812387376" observedRunningTime="2026-01-23 06:23:25.024775264 +0000 UTC m=+208.257283248" watchObservedRunningTime="2026-01-23 06:23:25.026987244 +0000 UTC m=+208.259495228" Jan 23 06:23:25 crc kubenswrapper[4784]: I0123 06:23:25.172190 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wkg9"] Jan 23 06:23:26 crc kubenswrapper[4784]: I0123 06:23:26.016387 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vfdf" event={"ID":"64c4e525-6765-4230-b129-3364819dfa47","Type":"ContainerStarted","Data":"202ab7f6b11fdbf49b5af60ff308819d46c01ff7a19b75b4deaf936da1ac6205"} Jan 23 06:23:26 crc kubenswrapper[4784]: I0123 06:23:26.019304 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"25adc041f328bcd1365d0e84326a3506984c28454ac67a405c2afd11863cc83e"} Jan 23 06:23:26 crc kubenswrapper[4784]: I0123 06:23:26.280103 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5vfdf" podStartSLOduration=5.12088226 podStartE2EDuration="57.280075702s" podCreationTimestamp="2026-01-23 06:22:29 +0000 UTC" firstStartedPulling="2026-01-23 06:22:32.600250409 +0000 UTC m=+155.832758383" lastFinishedPulling="2026-01-23 06:23:24.759443851 +0000 UTC m=+207.991951825" observedRunningTime="2026-01-23 06:23:26.040317644 +0000 UTC m=+209.272825608" watchObservedRunningTime="2026-01-23 06:23:26.280075702 +0000 UTC m=+209.512583676" Jan 23 06:23:26 crc kubenswrapper[4784]: I0123 06:23:26.474946 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/downloads-7954f5f757-bb5s2" Jan 23 06:23:28 crc kubenswrapper[4784]: I0123 06:23:28.037495 4784 generic.go:334] "Generic (PLEG): container finished" podID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerID="5bc6e3fe5a4b60082275b4a899db38d7f6345e55d38477be9db17282f84e8d4d" exitCode=0 Jan 23 06:23:28 crc kubenswrapper[4784]: I0123 06:23:28.037588 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rk5j" event={"ID":"9a96ea8e-c45f-4799-886a-aef90c8b8e1a","Type":"ContainerDied","Data":"5bc6e3fe5a4b60082275b4a899db38d7f6345e55d38477be9db17282f84e8d4d"} Jan 23 06:23:29 crc kubenswrapper[4784]: I0123 06:23:29.048283 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rk5j" event={"ID":"9a96ea8e-c45f-4799-886a-aef90c8b8e1a","Type":"ContainerStarted","Data":"a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0"} Jan 23 06:23:29 crc kubenswrapper[4784]: I0123 06:23:29.083821 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2rk5j" podStartSLOduration=4.335154406 podStartE2EDuration="59.083787855s" podCreationTimestamp="2026-01-23 06:22:30 +0000 UTC" firstStartedPulling="2026-01-23 06:22:33.753587168 +0000 UTC m=+156.986095142" lastFinishedPulling="2026-01-23 06:23:28.502220617 +0000 UTC m=+211.734728591" observedRunningTime="2026-01-23 06:23:29.078000798 +0000 UTC m=+212.310508772" watchObservedRunningTime="2026-01-23 06:23:29.083787855 +0000 UTC m=+212.316295829" Jan 23 06:23:29 crc kubenswrapper[4784]: I0123 06:23:29.810844 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:23:29 crc kubenswrapper[4784]: I0123 06:23:29.811008 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:23:29 crc 
kubenswrapper[4784]: I0123 06:23:29.888531 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:23:29 crc kubenswrapper[4784]: I0123 06:23:29.888629 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:23:29 crc kubenswrapper[4784]: I0123 06:23:29.907463 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:23:29 crc kubenswrapper[4784]: I0123 06:23:29.983193 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:23:30 crc kubenswrapper[4784]: I0123 06:23:30.101806 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:23:30 crc kubenswrapper[4784]: I0123 06:23:30.104191 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:23:30 crc kubenswrapper[4784]: I0123 06:23:30.901620 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bv64c"] Jan 23 06:23:31 crc kubenswrapper[4784]: I0123 06:23:31.149391 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:23:31 crc kubenswrapper[4784]: I0123 06:23:31.150207 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:23:31 crc kubenswrapper[4784]: I0123 06:23:31.217493 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:23:31 crc kubenswrapper[4784]: I0123 06:23:31.707396 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:23:31 crc kubenswrapper[4784]: I0123 06:23:31.707498 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:23:31 crc kubenswrapper[4784]: I0123 06:23:31.762218 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:23:32 crc kubenswrapper[4784]: I0123 06:23:32.068488 4784 generic.go:334] "Generic (PLEG): container finished" podID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerID="aeb2d829b569c11d2ce05e50936d51b3fcb12045bbe9f75aa574e5c6c8d8c814" exitCode=0 Jan 23 06:23:32 crc kubenswrapper[4784]: I0123 06:23:32.068573 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2bnm" event={"ID":"4bf2bb81-ee53-475a-9648-987ec2d1adb2","Type":"ContainerDied","Data":"aeb2d829b569c11d2ce05e50936d51b3fcb12045bbe9f75aa574e5c6c8d8c814"} Jan 23 06:23:32 crc kubenswrapper[4784]: I0123 06:23:32.073840 4784 generic.go:334] "Generic (PLEG): container finished" podID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerID="f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa" exitCode=0 Jan 23 06:23:32 crc kubenswrapper[4784]: I0123 06:23:32.073903 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whf7w" event={"ID":"8cd87290-f925-44f2-b7a6-ec3172726ad6","Type":"ContainerDied","Data":"f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa"} Jan 23 06:23:32 crc kubenswrapper[4784]: I0123 06:23:32.074813 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bv64c" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerName="registry-server" containerID="cri-o://13fc31123d485582b5509138662ff70700a55025ced30884baebc7e0c5cf7d34" gracePeriod=2 Jan 23 06:23:32 crc 
kubenswrapper[4784]: I0123 06:23:32.128601 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.093110 4784 generic.go:334] "Generic (PLEG): container finished" podID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerID="13fc31123d485582b5509138662ff70700a55025ced30884baebc7e0c5cf7d34" exitCode=0 Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.094943 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv64c" event={"ID":"0771b3f9-d762-4e52-8433-b1802a8c2201","Type":"ContainerDied","Data":"13fc31123d485582b5509138662ff70700a55025ced30884baebc7e0c5cf7d34"} Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.144958 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.672942 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.696909 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-catalog-content\") pod \"0771b3f9-d762-4e52-8433-b1802a8c2201\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.697429 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4285s\" (UniqueName: \"kubernetes.io/projected/0771b3f9-d762-4e52-8433-b1802a8c2201-kube-api-access-4285s\") pod \"0771b3f9-d762-4e52-8433-b1802a8c2201\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.697459 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-utilities\") pod \"0771b3f9-d762-4e52-8433-b1802a8c2201\" (UID: \"0771b3f9-d762-4e52-8433-b1802a8c2201\") " Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.698543 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-utilities" (OuterVolumeSpecName: "utilities") pod "0771b3f9-d762-4e52-8433-b1802a8c2201" (UID: "0771b3f9-d762-4e52-8433-b1802a8c2201"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.708637 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0771b3f9-d762-4e52-8433-b1802a8c2201-kube-api-access-4285s" (OuterVolumeSpecName: "kube-api-access-4285s") pod "0771b3f9-d762-4e52-8433-b1802a8c2201" (UID: "0771b3f9-d762-4e52-8433-b1802a8c2201"). InnerVolumeSpecName "kube-api-access-4285s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.775518 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0771b3f9-d762-4e52-8433-b1802a8c2201" (UID: "0771b3f9-d762-4e52-8433-b1802a8c2201"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.801706 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.801770 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4285s\" (UniqueName: \"kubernetes.io/projected/0771b3f9-d762-4e52-8433-b1802a8c2201-kube-api-access-4285s\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.801788 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0771b3f9-d762-4e52-8433-b1802a8c2201-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:33 crc kubenswrapper[4784]: I0123 06:23:33.895184 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjhhq"] Jan 23 06:23:34 crc kubenswrapper[4784]: I0123 06:23:34.102019 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv64c" event={"ID":"0771b3f9-d762-4e52-8433-b1802a8c2201","Type":"ContainerDied","Data":"9720827a26222161782cdfd4bd2feccd7461c179784ff0aca8c139963121a4f1"} Jan 23 06:23:34 crc kubenswrapper[4784]: I0123 06:23:34.102086 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bv64c" Jan 23 06:23:34 crc kubenswrapper[4784]: I0123 06:23:34.102086 4784 scope.go:117] "RemoveContainer" containerID="13fc31123d485582b5509138662ff70700a55025ced30884baebc7e0c5cf7d34" Jan 23 06:23:34 crc kubenswrapper[4784]: I0123 06:23:34.102468 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gjhhq" podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerName="registry-server" containerID="cri-o://ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851" gracePeriod=2 Jan 23 06:23:34 crc kubenswrapper[4784]: I0123 06:23:34.134042 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bv64c"] Jan 23 06:23:34 crc kubenswrapper[4784]: I0123 06:23:34.138090 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bv64c"] Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.264310 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" path="/var/lib/kubelet/pods/0771b3f9-d762-4e52-8433-b1802a8c2201/volumes" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.369292 4784 scope.go:117] "RemoveContainer" containerID="4bc0ce3726e219bce9dec2bc0f0df3c7ddf65f707abdf04253f3010150628351" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.672469 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.731626 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-catalog-content\") pod \"0a480350-f75f-4866-bca5-3c8a6793ad46\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.731959 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-utilities\") pod \"0a480350-f75f-4866-bca5-3c8a6793ad46\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.732084 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lngv\" (UniqueName: \"kubernetes.io/projected/0a480350-f75f-4866-bca5-3c8a6793ad46-kube-api-access-6lngv\") pod \"0a480350-f75f-4866-bca5-3c8a6793ad46\" (UID: \"0a480350-f75f-4866-bca5-3c8a6793ad46\") " Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.734169 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-utilities" (OuterVolumeSpecName: "utilities") pod "0a480350-f75f-4866-bca5-3c8a6793ad46" (UID: "0a480350-f75f-4866-bca5-3c8a6793ad46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.742122 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a480350-f75f-4866-bca5-3c8a6793ad46-kube-api-access-6lngv" (OuterVolumeSpecName: "kube-api-access-6lngv") pod "0a480350-f75f-4866-bca5-3c8a6793ad46" (UID: "0a480350-f75f-4866-bca5-3c8a6793ad46"). InnerVolumeSpecName "kube-api-access-6lngv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.762964 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a480350-f75f-4866-bca5-3c8a6793ad46" (UID: "0a480350-f75f-4866-bca5-3c8a6793ad46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.833575 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.833631 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a480350-f75f-4866-bca5-3c8a6793ad46-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.833653 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lngv\" (UniqueName: \"kubernetes.io/projected/0a480350-f75f-4866-bca5-3c8a6793ad46-kube-api-access-6lngv\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:35 crc kubenswrapper[4784]: I0123 06:23:35.937298 4784 scope.go:117] "RemoveContainer" containerID="690059e8cb4156cf8a188e4f08c27ec969aa14013639c0c8910906b7691bf208" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.120478 4784 generic.go:334] "Generic (PLEG): container finished" podID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerID="ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851" exitCode=0 Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.120636 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjhhq" 
event={"ID":"0a480350-f75f-4866-bca5-3c8a6793ad46","Type":"ContainerDied","Data":"ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851"} Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.121093 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjhhq" event={"ID":"0a480350-f75f-4866-bca5-3c8a6793ad46","Type":"ContainerDied","Data":"84ac10c5c488300bf63eebc0234aeebc98691e2b535f675454507f0035c2ce09"} Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.121131 4784 scope.go:117] "RemoveContainer" containerID="ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.120780 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjhhq" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.160956 4784 scope.go:117] "RemoveContainer" containerID="cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.160977 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjhhq"] Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.167486 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjhhq"] Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.181001 4784 scope.go:117] "RemoveContainer" containerID="6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.195468 4784 scope.go:117] "RemoveContainer" containerID="ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851" Jan 23 06:23:36 crc kubenswrapper[4784]: E0123 06:23:36.195902 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851\": container 
with ID starting with ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851 not found: ID does not exist" containerID="ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.195958 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851"} err="failed to get container status \"ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851\": rpc error: code = NotFound desc = could not find container \"ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851\": container with ID starting with ab852ca50daf99a54bc12e850377e5d69a84ac82f1bff9de4332225e87d4d851 not found: ID does not exist" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.195999 4784 scope.go:117] "RemoveContainer" containerID="cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c" Jan 23 06:23:36 crc kubenswrapper[4784]: E0123 06:23:36.196270 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c\": container with ID starting with cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c not found: ID does not exist" containerID="cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.196299 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c"} err="failed to get container status \"cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c\": rpc error: code = NotFound desc = could not find container \"cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c\": container with ID starting with cd76713faf6fe538ad02013070d5c7ad1ce71046004ba586f0b1501c5649fc2c not 
found: ID does not exist" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.196314 4784 scope.go:117] "RemoveContainer" containerID="6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0" Jan 23 06:23:36 crc kubenswrapper[4784]: E0123 06:23:36.196588 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0\": container with ID starting with 6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0 not found: ID does not exist" containerID="6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0" Jan 23 06:23:36 crc kubenswrapper[4784]: I0123 06:23:36.196622 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0"} err="failed to get container status \"6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0\": rpc error: code = NotFound desc = could not find container \"6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0\": container with ID starting with 6c95480ef4afe90a0d24ccd05ca1e1100f0e5a6f01ff9e89975c8dd93dd1ded0 not found: ID does not exist" Jan 23 06:23:37 crc kubenswrapper[4784]: I0123 06:23:37.148947 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2bnm" event={"ID":"4bf2bb81-ee53-475a-9648-987ec2d1adb2","Type":"ContainerStarted","Data":"7e468db067e2bf1c6e3a73d40165b87bf320d022c181546e5cfd600002d12fcf"} Jan 23 06:23:37 crc kubenswrapper[4784]: I0123 06:23:37.264321 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" path="/var/lib/kubelet/pods/0a480350-f75f-4866-bca5-3c8a6793ad46/volumes" Jan 23 06:23:38 crc kubenswrapper[4784]: I0123 06:23:38.160645 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zkrz6" event={"ID":"de559431-551a-4057-96ec-37537d6eddc8","Type":"ContainerStarted","Data":"f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c"} Jan 23 06:23:38 crc kubenswrapper[4784]: I0123 06:23:38.163919 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vbpl" event={"ID":"f2daccf7-5481-4092-a720-045f3e033b62","Type":"ContainerStarted","Data":"ef854161c7cb65566e9af2a5187bf29536ecc8b42d7c3abf8275cb4ea0b987ea"} Jan 23 06:23:38 crc kubenswrapper[4784]: I0123 06:23:38.247874 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h2bnm" podStartSLOduration=6.946257199 podStartE2EDuration="1m10.247850139s" podCreationTimestamp="2026-01-23 06:22:28 +0000 UTC" firstStartedPulling="2026-01-23 06:22:32.637299188 +0000 UTC m=+155.869807162" lastFinishedPulling="2026-01-23 06:23:35.938892088 +0000 UTC m=+219.171400102" observedRunningTime="2026-01-23 06:23:38.243870881 +0000 UTC m=+221.476378875" watchObservedRunningTime="2026-01-23 06:23:38.247850139 +0000 UTC m=+221.480358113" Jan 23 06:23:39 crc kubenswrapper[4784]: I0123 06:23:39.175279 4784 generic.go:334] "Generic (PLEG): container finished" podID="de559431-551a-4057-96ec-37537d6eddc8" containerID="f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c" exitCode=0 Jan 23 06:23:39 crc kubenswrapper[4784]: I0123 06:23:39.175374 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkrz6" event={"ID":"de559431-551a-4057-96ec-37537d6eddc8","Type":"ContainerDied","Data":"f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c"} Jan 23 06:23:39 crc kubenswrapper[4784]: I0123 06:23:39.177965 4784 generic.go:334] "Generic (PLEG): container finished" podID="f2daccf7-5481-4092-a720-045f3e033b62" containerID="ef854161c7cb65566e9af2a5187bf29536ecc8b42d7c3abf8275cb4ea0b987ea" exitCode=0 Jan 23 
06:23:39 crc kubenswrapper[4784]: I0123 06:23:39.178000 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vbpl" event={"ID":"f2daccf7-5481-4092-a720-045f3e033b62","Type":"ContainerDied","Data":"ef854161c7cb65566e9af2a5187bf29536ecc8b42d7c3abf8275cb4ea0b987ea"} Jan 23 06:23:39 crc kubenswrapper[4784]: I0123 06:23:39.587472 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:23:39 crc kubenswrapper[4784]: I0123 06:23:39.587583 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:23:39 crc kubenswrapper[4784]: I0123 06:23:39.651221 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:23:42 crc kubenswrapper[4784]: I0123 06:23:42.202356 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whf7w" event={"ID":"8cd87290-f925-44f2-b7a6-ec3172726ad6","Type":"ContainerStarted","Data":"18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7"} Jan 23 06:23:42 crc kubenswrapper[4784]: I0123 06:23:42.225866 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-whf7w" podStartSLOduration=4.750899316 podStartE2EDuration="1m13.225833458s" podCreationTimestamp="2026-01-23 06:22:29 +0000 UTC" firstStartedPulling="2026-01-23 06:22:32.616955074 +0000 UTC m=+155.849463038" lastFinishedPulling="2026-01-23 06:23:41.091889186 +0000 UTC m=+224.324397180" observedRunningTime="2026-01-23 06:23:42.225019017 +0000 UTC m=+225.457526991" watchObservedRunningTime="2026-01-23 06:23:42.225833458 +0000 UTC m=+225.458341442" Jan 23 06:23:43 crc kubenswrapper[4784]: I0123 06:23:43.212787 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zkrz6" event={"ID":"de559431-551a-4057-96ec-37537d6eddc8","Type":"ContainerStarted","Data":"9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff"} Jan 23 06:23:44 crc kubenswrapper[4784]: I0123 06:23:44.243591 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zkrz6" podStartSLOduration=5.693673756 podStartE2EDuration="1m12.243564445s" podCreationTimestamp="2026-01-23 06:22:32 +0000 UTC" firstStartedPulling="2026-01-23 06:22:36.026388494 +0000 UTC m=+159.258896468" lastFinishedPulling="2026-01-23 06:23:42.576279183 +0000 UTC m=+225.808787157" observedRunningTime="2026-01-23 06:23:44.241858348 +0000 UTC m=+227.474366322" watchObservedRunningTime="2026-01-23 06:23:44.243564445 +0000 UTC m=+227.476072419" Jan 23 06:23:48 crc kubenswrapper[4784]: I0123 06:23:48.253561 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vbpl" event={"ID":"f2daccf7-5481-4092-a720-045f3e033b62","Type":"ContainerStarted","Data":"2ea00f16a1ba13f077f3571aaf1d472b5169b22b4cd6bb4c5b9f6b8bbbf609e0"} Jan 23 06:23:48 crc kubenswrapper[4784]: I0123 06:23:48.281165 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6vbpl" podStartSLOduration=4.736789595 podStartE2EDuration="1m16.281135303s" podCreationTimestamp="2026-01-23 06:22:32 +0000 UTC" firstStartedPulling="2026-01-23 06:22:34.820470773 +0000 UTC m=+158.052978747" lastFinishedPulling="2026-01-23 06:23:46.364816481 +0000 UTC m=+229.597324455" observedRunningTime="2026-01-23 06:23:48.2810052 +0000 UTC m=+231.513513184" watchObservedRunningTime="2026-01-23 06:23:48.281135303 +0000 UTC m=+231.513643277" Jan 23 06:23:49 crc kubenswrapper[4784]: I0123 06:23:49.639618 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:23:49 crc 
kubenswrapper[4784]: I0123 06:23:49.979564 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:23:49 crc kubenswrapper[4784]: I0123 06:23:49.979654 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.019695 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.218488 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" podUID="32f1325e-ec9d-4375-855d-970361b2ac03" containerName="oauth-openshift" containerID="cri-o://c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2" gracePeriod=15 Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.323603 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.682139 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731147 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6"] Jan 23 06:23:50 crc kubenswrapper[4784]: E0123 06:23:50.731500 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f1325e-ec9d-4375-855d-970361b2ac03" containerName="oauth-openshift" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731517 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f1325e-ec9d-4375-855d-970361b2ac03" containerName="oauth-openshift" Jan 23 06:23:50 crc kubenswrapper[4784]: E0123 06:23:50.731531 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d17c4a-d0fc-4232-8194-5b2898e72307" containerName="pruner" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731537 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d17c4a-d0fc-4232-8194-5b2898e72307" containerName="pruner" Jan 23 06:23:50 crc kubenswrapper[4784]: E0123 06:23:50.731552 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerName="extract-utilities" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731562 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerName="extract-utilities" Jan 23 06:23:50 crc kubenswrapper[4784]: E0123 06:23:50.731577 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerName="extract-content" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731585 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerName="extract-content" Jan 23 06:23:50 crc kubenswrapper[4784]: E0123 06:23:50.731597 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerName="extract-utilities" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731604 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerName="extract-utilities" Jan 23 06:23:50 crc kubenswrapper[4784]: E0123 06:23:50.731615 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerName="registry-server" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731622 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerName="registry-server" Jan 23 06:23:50 crc kubenswrapper[4784]: E0123 06:23:50.731632 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerName="extract-content" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731639 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerName="extract-content" Jan 23 06:23:50 crc kubenswrapper[4784]: E0123 06:23:50.731646 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerName="registry-server" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731653 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerName="registry-server" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731781 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d17c4a-d0fc-4232-8194-5b2898e72307" containerName="pruner" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731806 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a480350-f75f-4866-bca5-3c8a6793ad46" containerName="registry-server" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731815 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="32f1325e-ec9d-4375-855d-970361b2ac03" containerName="oauth-openshift" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.731824 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0771b3f9-d762-4e52-8433-b1802a8c2201" containerName="registry-server" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.732295 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.744399 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6"] Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762100 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762178 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762230 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " 
pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762265 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762394 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762544 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762589 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762702 
4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762788 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/858edacc-ac93-4885-82b6-eea41f7eabdc-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762852 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkrt\" (UniqueName: \"kubernetes.io/projected/858edacc-ac93-4885-82b6-eea41f7eabdc-kube-api-access-2pkrt\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762893 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762937 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.762961 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.763010 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.863767 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-provider-selection\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.863886 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-cliconfig\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 
crc kubenswrapper[4784]: I0123 06:23:50.863919 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-serving-cert\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.863944 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hd8c\" (UniqueName: \"kubernetes.io/projected/32f1325e-ec9d-4375-855d-970361b2ac03-kube-api-access-7hd8c\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.864024 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-session\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.864076 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-error\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.864119 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-router-certs\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.864159 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-idp-0-file-data\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.864192 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-audit-policies\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.865197 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.865220 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.865302 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-service-ca\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.865343 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32f1325e-ec9d-4375-855d-970361b2ac03-audit-dir\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.865390 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-trusted-ca-bundle\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.865459 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32f1325e-ec9d-4375-855d-970361b2ac03-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.865553 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-ocp-branding-template\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866161 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866273 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-login\") pod \"32f1325e-ec9d-4375-855d-970361b2ac03\" (UID: \"32f1325e-ec9d-4375-855d-970361b2ac03\") " Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866298 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866569 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866640 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/858edacc-ac93-4885-82b6-eea41f7eabdc-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866671 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkrt\" (UniqueName: \"kubernetes.io/projected/858edacc-ac93-4885-82b6-eea41f7eabdc-kube-api-access-2pkrt\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866697 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866732 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866777 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866810 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866861 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866890 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " 
pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866927 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866958 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.866997 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.867041 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.867071 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.867130 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.867148 4784 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.867163 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.867178 4784 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32f1325e-ec9d-4375-855d-970361b2ac03-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.867195 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.867698 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.868301 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.871674 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-audit-policies\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.872053 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f1325e-ec9d-4375-855d-970361b2ac03-kube-api-access-7hd8c" (OuterVolumeSpecName: "kube-api-access-7hd8c") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "kube-api-access-7hd8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.872380 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.873813 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.874907 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.875415 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-login\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.876286 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.876408 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.878117 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-template-error\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.878288 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.878678 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.878733 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-session\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.879915 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.876472 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/858edacc-ac93-4885-82b6-eea41f7eabdc-audit-dir\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.882990 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.883734 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.884221 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.887395 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/858edacc-ac93-4885-82b6-eea41f7eabdc-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.889890 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.890339 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "32f1325e-ec9d-4375-855d-970361b2ac03" (UID: "32f1325e-ec9d-4375-855d-970361b2ac03"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.903072 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkrt\" (UniqueName: \"kubernetes.io/projected/858edacc-ac93-4885-82b6-eea41f7eabdc-kube-api-access-2pkrt\") pod \"oauth-openshift-5cf8f9f8d-5d2r6\" (UID: \"858edacc-ac93-4885-82b6-eea41f7eabdc\") " pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.968339 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.968378 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.968396 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.968407 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.968419 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc 
kubenswrapper[4784]: I0123 06:23:50.968433 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.968448 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.968464 4784 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32f1325e-ec9d-4375-855d-970361b2ac03-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:50 crc kubenswrapper[4784]: I0123 06:23:50.968474 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hd8c\" (UniqueName: \"kubernetes.io/projected/32f1325e-ec9d-4375-855d-970361b2ac03-kube-api-access-7hd8c\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.061264 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.278395 4784 generic.go:334] "Generic (PLEG): container finished" podID="32f1325e-ec9d-4375-855d-970361b2ac03" containerID="c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2" exitCode=0 Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.279630 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.280092 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" event={"ID":"32f1325e-ec9d-4375-855d-970361b2ac03","Type":"ContainerDied","Data":"c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2"} Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.280173 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4wkg9" event={"ID":"32f1325e-ec9d-4375-855d-970361b2ac03","Type":"ContainerDied","Data":"77031adbce31a831e7525bc0904a08c82d478b33a6803f69b375081501245680"} Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.280193 4784 scope.go:117] "RemoveContainer" containerID="c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2" Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.321958 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wkg9"] Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.326143 4784 scope.go:117] "RemoveContainer" containerID="c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2" Jan 23 06:23:51 crc kubenswrapper[4784]: E0123 06:23:51.326636 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2\": container with ID starting with c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2 not found: ID does not exist" containerID="c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2" Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.326681 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2"} err="failed to 
get container status \"c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2\": rpc error: code = NotFound desc = could not find container \"c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2\": container with ID starting with c46b9dd2b84bdf2ba1696f23c98fb0446747f77b83e18ae910710aef5bb480b2 not found: ID does not exist" Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.330881 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4wkg9"] Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.333892 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6"] Jan 23 06:23:51 crc kubenswrapper[4784]: I0123 06:23:51.506899 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-whf7w"] Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.291196 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" event={"ID":"858edacc-ac93-4885-82b6-eea41f7eabdc","Type":"ContainerStarted","Data":"6a4f0a17339e568aa115503ffd3805a3a9c96d8e3541e199ffd7885acd358f00"} Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.291294 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" event={"ID":"858edacc-ac93-4885-82b6-eea41f7eabdc","Type":"ContainerStarted","Data":"a1d10e18b1101ecc1ac4fb8ec19b75bcc4f18795febffc5a266f8f10617cfc6d"} Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.291416 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-whf7w" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerName="registry-server" containerID="cri-o://18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7" gracePeriod=2 Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.328508 4784 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" podStartSLOduration=27.328479288 podStartE2EDuration="27.328479288s" podCreationTimestamp="2026-01-23 06:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:23:52.328280303 +0000 UTC m=+235.560788307" watchObservedRunningTime="2026-01-23 06:23:52.328479288 +0000 UTC m=+235.560987302" Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.454537 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.455066 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.836327 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.909059 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-catalog-content\") pod \"8cd87290-f925-44f2-b7a6-ec3172726ad6\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.909322 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-utilities\") pod \"8cd87290-f925-44f2-b7a6-ec3172726ad6\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.909365 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s55qz\" (UniqueName: \"kubernetes.io/projected/8cd87290-f925-44f2-b7a6-ec3172726ad6-kube-api-access-s55qz\") pod \"8cd87290-f925-44f2-b7a6-ec3172726ad6\" (UID: \"8cd87290-f925-44f2-b7a6-ec3172726ad6\") " Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.910607 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-utilities" (OuterVolumeSpecName: "utilities") pod "8cd87290-f925-44f2-b7a6-ec3172726ad6" (UID: "8cd87290-f925-44f2-b7a6-ec3172726ad6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.920040 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd87290-f925-44f2-b7a6-ec3172726ad6-kube-api-access-s55qz" (OuterVolumeSpecName: "kube-api-access-s55qz") pod "8cd87290-f925-44f2-b7a6-ec3172726ad6" (UID: "8cd87290-f925-44f2-b7a6-ec3172726ad6"). InnerVolumeSpecName "kube-api-access-s55qz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:23:52 crc kubenswrapper[4784]: I0123 06:23:52.952499 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cd87290-f925-44f2-b7a6-ec3172726ad6" (UID: "8cd87290-f925-44f2-b7a6-ec3172726ad6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:52.999838 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:52.999950 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.010823 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.011099 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s55qz\" (UniqueName: \"kubernetes.io/projected/8cd87290-f925-44f2-b7a6-ec3172726ad6-kube-api-access-s55qz\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.011250 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cd87290-f925-44f2-b7a6-ec3172726ad6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.062176 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.263919 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="32f1325e-ec9d-4375-855d-970361b2ac03" path="/var/lib/kubelet/pods/32f1325e-ec9d-4375-855d-970361b2ac03/volumes" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.300107 4784 generic.go:334] "Generic (PLEG): container finished" podID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerID="18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7" exitCode=0 Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.300175 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whf7w" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.300201 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whf7w" event={"ID":"8cd87290-f925-44f2-b7a6-ec3172726ad6","Type":"ContainerDied","Data":"18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7"} Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.300271 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whf7w" event={"ID":"8cd87290-f925-44f2-b7a6-ec3172726ad6","Type":"ContainerDied","Data":"ca5523cb11dac990f9773164ea5e2eaad7a52e9c5ff4f17d4590a06c20752eb0"} Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.300298 4784 scope.go:117] "RemoveContainer" containerID="18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.301376 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.309715 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.319236 4784 scope.go:117] "RemoveContainer" containerID="f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa" Jan 23 06:23:53 crc 
kubenswrapper[4784]: I0123 06:23:53.331478 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-whf7w"] Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.337056 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-whf7w"] Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.344958 4784 scope.go:117] "RemoveContainer" containerID="1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.369920 4784 scope.go:117] "RemoveContainer" containerID="18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.370380 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:23:53 crc kubenswrapper[4784]: E0123 06:23:53.370735 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7\": container with ID starting with 18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7 not found: ID does not exist" containerID="18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.370785 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7"} err="failed to get container status \"18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7\": rpc error: code = NotFound desc = could not find container \"18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7\": container with ID starting with 18744e45a2e32afe4ecef4f8dfae18f79c1ece5f327b64160ab8fa39fc8e93e7 not found: ID does not exist" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.370821 4784 
scope.go:117] "RemoveContainer" containerID="f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa" Jan 23 06:23:53 crc kubenswrapper[4784]: E0123 06:23:53.372097 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa\": container with ID starting with f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa not found: ID does not exist" containerID="f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.372135 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa"} err="failed to get container status \"f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa\": rpc error: code = NotFound desc = could not find container \"f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa\": container with ID starting with f3c3a8878e1ef61256eb05c998ca5f43f8821db5efdf5a9c7462599bd2e66afa not found: ID does not exist" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.372173 4784 scope.go:117] "RemoveContainer" containerID="1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091" Jan 23 06:23:53 crc kubenswrapper[4784]: E0123 06:23:53.372795 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091\": container with ID starting with 1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091 not found: ID does not exist" containerID="1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.372871 4784 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091"} err="failed to get container status \"1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091\": rpc error: code = NotFound desc = could not find container \"1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091\": container with ID starting with 1a356bfb43910d03a67832ca71365806328527307891aa32a9cfff62e9935091 not found: ID does not exist" Jan 23 06:23:53 crc kubenswrapper[4784]: I0123 06:23:53.512987 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6vbpl" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="registry-server" probeResult="failure" output=< Jan 23 06:23:53 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 06:23:53 crc kubenswrapper[4784]: > Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.688854 4784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 06:23:54 crc kubenswrapper[4784]: E0123 06:23:54.689476 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerName="extract-utilities" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.689513 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerName="extract-utilities" Jan 23 06:23:54 crc kubenswrapper[4784]: E0123 06:23:54.689537 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerName="registry-server" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.689554 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerName="registry-server" Jan 23 06:23:54 crc kubenswrapper[4784]: E0123 06:23:54.689573 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerName="extract-content" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.689591 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerName="extract-content" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.689905 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" containerName="registry-server" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.691855 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.739073 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.739163 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.739204 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.739470 
4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.739662 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.742920 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.841687 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.841812 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.841863 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.841912 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.841973 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.841922 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.842007 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.842093 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.842166 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:54 crc kubenswrapper[4784]: I0123 06:23:54.842104 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.037493 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: W0123 06:23:55.078973 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-2f443e2ac92134b3431f24d29c97fa8126a8b8916a58c5cbfe2814407f8dfc97 WatchSource:0}: Error finding container 2f443e2ac92134b3431f24d29c97fa8126a8b8916a58c5cbfe2814407f8dfc97: Status 404 returned error can't find the container with id 2f443e2ac92134b3431f24d29c97fa8126a8b8916a58c5cbfe2814407f8dfc97 Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.263613 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd87290-f925-44f2-b7a6-ec3172726ad6" path="/var/lib/kubelet/pods/8cd87290-f925-44f2-b7a6-ec3172726ad6/volumes" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.320862 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2f443e2ac92134b3431f24d29c97fa8126a8b8916a58c5cbfe2814407f8dfc97"} Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.760146 4784 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.764572 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048" gracePeriod=15 Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.765120 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc" gracePeriod=15 Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.765574 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b" gracePeriod=15 Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.765967 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4" gracePeriod=15 Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.766199 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119" gracePeriod=15 Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.773403 4784 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:23:55 crc kubenswrapper[4784]: E0123 06:23:55.774830 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.774854 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 06:23:55 crc kubenswrapper[4784]: E0123 06:23:55.774877 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.775522 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 06:23:55 crc kubenswrapper[4784]: E0123 06:23:55.775543 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.775551 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 06:23:55 crc kubenswrapper[4784]: E0123 06:23:55.775569 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.775577 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 06:23:55 crc kubenswrapper[4784]: E0123 06:23:55.775584 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.775590 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:23:55 crc kubenswrapper[4784]: E0123 06:23:55.775605 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.775611 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:23:55 crc kubenswrapper[4784]: E0123 06:23:55.775624 4784 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.775631 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.777106 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.777145 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.777165 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.777187 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.777203 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.777222 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.864403 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.865245 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.865353 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.966898 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.966971 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.967063 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.967104 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.967167 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:55 crc kubenswrapper[4784]: I0123 06:23:55.967207 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.331163 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32"} Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.332271 4784 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.332870 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.335064 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.337105 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.338111 4784 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b" exitCode=0 Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.338144 4784 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc" exitCode=0 Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.338155 4784 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4" exitCode=0 Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.338163 4784 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119" exitCode=2 Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.338284 4784 scope.go:117] "RemoveContainer" containerID="ccf526c6e17f00b7b6ff372d0e8c477adcc17a0b104fbdb7968ef6d044fd6a64" Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.341094 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" containerID="97c6a5d983a55bb26c67b194bbd95a34daf6ee7c63be985912f5d7bac214dce0" exitCode=0 Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.341151 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3cba8aab-f67d-4ec7-99c1-6294655ebe56","Type":"ContainerDied","Data":"97c6a5d983a55bb26c67b194bbd95a34daf6ee7c63be985912f5d7bac214dce0"} Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.342091 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.342672 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:56 crc kubenswrapper[4784]: I0123 06:23:56.344682 4784 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.262082 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.263715 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.264486 4784 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.352967 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.677819 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.679598 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.680142 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.797693 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kubelet-dir\") pod \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.797786 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kube-api-access\") pod \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.797849 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3cba8aab-f67d-4ec7-99c1-6294655ebe56" (UID: "3cba8aab-f67d-4ec7-99c1-6294655ebe56"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.797902 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-var-lock\") pod \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\" (UID: \"3cba8aab-f67d-4ec7-99c1-6294655ebe56\") " Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.798025 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-var-lock" (OuterVolumeSpecName: "var-lock") pod "3cba8aab-f67d-4ec7-99c1-6294655ebe56" (UID: "3cba8aab-f67d-4ec7-99c1-6294655ebe56"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.798230 4784 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.798245 4784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3cba8aab-f67d-4ec7-99c1-6294655ebe56-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.805995 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3cba8aab-f67d-4ec7-99c1-6294655ebe56" (UID: "3cba8aab-f67d-4ec7-99c1-6294655ebe56"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:23:57 crc kubenswrapper[4784]: I0123 06:23:57.900493 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cba8aab-f67d-4ec7-99c1-6294655ebe56-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.369569 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.372553 4784 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048" exitCode=0 Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.376670 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3cba8aab-f67d-4ec7-99c1-6294655ebe56","Type":"ContainerDied","Data":"eb94c2894653c8370cb0d4fe1e35ef9350207e4a0d3efc783ae0a333853d1964"} Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.376732 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb94c2894653c8370cb0d4fe1e35ef9350207e4a0d3efc783ae0a333853d1964" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.376885 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.398694 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.399220 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: E0123 06:23:58.436458 4784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: E0123 06:23:58.437314 4784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: E0123 06:23:58.437872 4784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: E0123 06:23:58.438406 4784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: E0123 06:23:58.439116 4784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.439220 4784 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 06:23:58 crc kubenswrapper[4784]: E0123 06:23:58.440051 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="200ms" Jan 23 06:23:58 crc kubenswrapper[4784]: E0123 06:23:58.642243 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="400ms" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.677154 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.679276 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.680227 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.680966 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.681442 4784 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.814342 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.814412 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.814535 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.815068 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.815092 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.815155 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.815579 4784 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.815604 4784 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:58 crc kubenswrapper[4784]: I0123 06:23:58.815614 4784 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:23:59 crc kubenswrapper[4784]: E0123 06:23:59.043727 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="800ms" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.260543 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.388700 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.390194 4784 scope.go:117] "RemoveContainer" containerID="a214e3b5e0ecb80aaddb6c45bf90a8930a628438e196606e9824d79b4978912b" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.390462 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.391584 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.392004 4784 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.392376 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.396044 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.396391 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.396871 4784 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.408670 4784 scope.go:117] "RemoveContainer" containerID="0dee5e04546f1e94218a0ae67bf7f5c484dbe559d4f02010a5b9c7e2b86d75cc" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.423091 4784 scope.go:117] "RemoveContainer" containerID="e19f86c8193fc4cf98a3bcb2a77df4e1251620cb17e67b530572f2fb45d3d3f4" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.442488 4784 scope.go:117] "RemoveContainer" containerID="b248dd02c7841d3704228d1a4cba548f78c96891125b3b067ac8b52ef4024119" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.458695 4784 scope.go:117] "RemoveContainer" containerID="7c313465adb04efef7c9847c66209da4fd41ed7211db61386c825c7271419048" Jan 23 06:23:59 crc kubenswrapper[4784]: I0123 06:23:59.475700 4784 scope.go:117] "RemoveContainer" containerID="b43b171928ed951fa79c71adf364d77ef55e0d657b3e0b9acf0b04d460af9f37" Jan 23 06:23:59 crc kubenswrapper[4784]: E0123 06:23:59.845101 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="1.6s" Jan 23 06:24:00 crc kubenswrapper[4784]: E0123 06:24:00.810280 4784 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.217:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d4809dba09abf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Started,Message:Started container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 06:23:55.803114175 +0000 UTC m=+239.035622149,LastTimestamp:2026-01-23 06:23:55.803114175 +0000 UTC m=+239.035622149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 06:24:00 crc kubenswrapper[4784]: E0123 06:24:00.820363 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:23:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:23:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:23:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:23:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3295ee1e384bd13d7f93a565d0e83b4cb096da43c673235ced6ac2c39d64dfa1\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:91b55f2f378a9a1fbbda6c2423a0a3bc0c66e0dd45dee584db70782d1b7ba863\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1671873254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:86aa2e9e8c3a1d4fdb701dc4c88eca6a9d0e219a7bd13fb13cb88cb1d0868ba4\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:f24d420ce166977917c7165d0314801df739a06bf165feb72ef8dea197d6fab9\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"
sizeBytes\\\":1203140844},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3d4be67bf38d8d9a00d4eb2fcb7c570ca80966449fc3cd77580f2a690846b80b\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:651d189ed9ab1587cc5ce363825106ed182066b2b48d0d59dc2520425d0d495b\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1175487721},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:00 crc kubenswrapper[4784]: E0123 06:24:00.821373 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:00 crc kubenswrapper[4784]: E0123 06:24:00.821880 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:00 crc kubenswrapper[4784]: E0123 06:24:00.822367 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:00 crc kubenswrapper[4784]: E0123 06:24:00.822918 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:00 crc kubenswrapper[4784]: E0123 06:24:00.822986 4784 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:24:01 crc kubenswrapper[4784]: E0123 06:24:01.446126 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="3.2s" Jan 23 06:24:02 crc kubenswrapper[4784]: E0123 06:24:02.401492 4784 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.217:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d4809dba09abf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Started,Message:Started container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 06:23:55.803114175 +0000 UTC m=+239.035622149,LastTimestamp:2026-01-23 06:23:55.803114175 +0000 UTC m=+239.035622149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 06:24:02 crc kubenswrapper[4784]: I0123 06:24:02.505038 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:24:02 crc kubenswrapper[4784]: I0123 06:24:02.506380 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:02 crc kubenswrapper[4784]: I0123 06:24:02.507479 4784 status_manager.go:851] "Failed to get status for pod" 
podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:02 crc kubenswrapper[4784]: I0123 06:24:02.508231 4784 status_manager.go:851] "Failed to get status for pod" podUID="f2daccf7-5481-4092-a720-045f3e033b62" pod="openshift-marketplace/redhat-operators-6vbpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6vbpl\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:02 crc kubenswrapper[4784]: I0123 06:24:02.553927 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:24:02 crc kubenswrapper[4784]: I0123 06:24:02.554972 4784 status_manager.go:851] "Failed to get status for pod" podUID="f2daccf7-5481-4092-a720-045f3e033b62" pod="openshift-marketplace/redhat-operators-6vbpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6vbpl\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:02 crc kubenswrapper[4784]: I0123 06:24:02.555288 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:02 crc kubenswrapper[4784]: I0123 06:24:02.555522 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.217:6443: connect: connection refused" Jan 23 06:24:04 crc kubenswrapper[4784]: E0123 06:24:04.309164 4784 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.217:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" volumeName="registry-storage" Jan 23 06:24:04 crc kubenswrapper[4784]: E0123 06:24:04.647197 4784 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.217:6443: connect: connection refused" interval="6.4s" Jan 23 06:24:07 crc kubenswrapper[4784]: I0123 06:24:07.256778 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:07 crc kubenswrapper[4784]: I0123 06:24:07.257639 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:07 crc kubenswrapper[4784]: I0123 06:24:07.257907 4784 status_manager.go:851] "Failed to get status for pod" podUID="f2daccf7-5481-4092-a720-045f3e033b62" pod="openshift-marketplace/redhat-operators-6vbpl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6vbpl\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:09 crc kubenswrapper[4784]: I0123 06:24:09.253878 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:09 crc kubenswrapper[4784]: I0123 06:24:09.255912 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:09 crc kubenswrapper[4784]: I0123 06:24:09.256722 4784 status_manager.go:851] "Failed to get status for pod" podUID="f2daccf7-5481-4092-a720-045f3e033b62" pod="openshift-marketplace/redhat-operators-6vbpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6vbpl\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:09 crc kubenswrapper[4784]: I0123 06:24:09.257479 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:09 crc kubenswrapper[4784]: I0123 06:24:09.283517 4784 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:09 crc kubenswrapper[4784]: I0123 06:24:09.283568 4784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:09 crc 
kubenswrapper[4784]: E0123 06:24:09.284350 4784 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:09 crc kubenswrapper[4784]: I0123 06:24:09.285254 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:09 crc kubenswrapper[4784]: W0123 06:24:09.316103 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b27caeffebb7a6455d5992628e848c4e55c69c6885200d2c41bf80f6abb9e08c WatchSource:0}: Error finding container b27caeffebb7a6455d5992628e848c4e55c69c6885200d2c41bf80f6abb9e08c: Status 404 returned error can't find the container with id b27caeffebb7a6455d5992628e848c4e55c69c6885200d2c41bf80f6abb9e08c Jan 23 06:24:09 crc kubenswrapper[4784]: I0123 06:24:09.463976 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b27caeffebb7a6455d5992628e848c4e55c69c6885200d2c41bf80f6abb9e08c"} Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.473149 4784 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1924bd26e041e5d0dfde43829f6370c32804001c61d204fed8f9cef736ca4422" exitCode=0 Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.473240 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1924bd26e041e5d0dfde43829f6370c32804001c61d204fed8f9cef736ca4422"} Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.473535 4784 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.473744 4784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.474190 4784 status_manager.go:851] "Failed to get status for pod" podUID="f2daccf7-5481-4092-a720-045f3e033b62" pod="openshift-marketplace/redhat-operators-6vbpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6vbpl\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:10 crc kubenswrapper[4784]: E0123 06:24:10.474277 4784 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.474510 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.474767 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.477515 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.477566 4784 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c" exitCode=1 Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.477598 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c"} Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.478014 4784 scope.go:117] "RemoveContainer" containerID="5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.479373 4784 status_manager.go:851] "Failed to get status for pod" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.479835 4784 status_manager.go:851] "Failed to get status for pod" podUID="f2daccf7-5481-4092-a720-045f3e033b62" pod="openshift-marketplace/redhat-operators-6vbpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6vbpl\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.480232 4784 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.480886 4784 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.217:6443: connect: connection refused" Jan 23 06:24:10 crc kubenswrapper[4784]: I0123 06:24:10.675861 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:24:11 crc kubenswrapper[4784]: I0123 06:24:11.496900 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8d0f0366c07087dcc670bb026c28483b9d202b86e0d205bf5b630284201f5fa3"} Jan 23 06:24:11 crc kubenswrapper[4784]: I0123 06:24:11.497267 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"139c6c4fa22562ea3503ac3d8fca124c22621bd3c345e9ed8d34d656f696f7a7"} Jan 23 06:24:11 crc kubenswrapper[4784]: I0123 06:24:11.497280 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3da84157eadda1142a2f9607247ba0c490bad4f461779023c4b760b1e194a0af"} Jan 23 06:24:11 crc kubenswrapper[4784]: I0123 06:24:11.504563 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 06:24:11 crc kubenswrapper[4784]: I0123 06:24:11.504623 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2389be05b151eb26e0644f4d99c1c86fe04639bc4c03e4b8d6b51c7653c9c041"} Jan 23 06:24:12 crc kubenswrapper[4784]: I0123 06:24:12.482541 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:24:12 crc kubenswrapper[4784]: I0123 06:24:12.527544 4784 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:12 crc kubenswrapper[4784]: I0123 06:24:12.528244 4784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:12 crc kubenswrapper[4784]: I0123 06:24:12.527549 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aad91630bc17d5a93bdfac6ffabc05f5989d43f269ba6a0089d6096b1c433aaa"} Jan 23 06:24:12 crc kubenswrapper[4784]: I0123 06:24:12.528484 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:12 crc kubenswrapper[4784]: I0123 06:24:12.528515 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b4726c180d364b1ae74bf103581c8da93eac1241b3fd2e6bfe86f666dea9950a"} Jan 23 06:24:14 crc kubenswrapper[4784]: I0123 06:24:14.286395 4784 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:14 crc kubenswrapper[4784]: I0123 06:24:14.287104 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:14 crc kubenswrapper[4784]: I0123 06:24:14.295661 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:17 crc kubenswrapper[4784]: I0123 06:24:17.539527 4784 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:18 crc kubenswrapper[4784]: I0123 06:24:18.126198 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:24:18 crc kubenswrapper[4784]: I0123 06:24:18.131086 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:24:18 crc kubenswrapper[4784]: I0123 06:24:18.572423 4784 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:18 crc kubenswrapper[4784]: I0123 06:24:18.572496 4784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:18 crc kubenswrapper[4784]: I0123 06:24:18.581236 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:18 crc kubenswrapper[4784]: I0123 06:24:18.586830 4784 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3c51f97b-3800-4985-a967-3865d74606a0" Jan 23 06:24:19 crc kubenswrapper[4784]: 
I0123 06:24:19.577716 4784 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:19 crc kubenswrapper[4784]: I0123 06:24:19.577784 4784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:22 crc kubenswrapper[4784]: I0123 06:24:22.489741 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:24:27 crc kubenswrapper[4784]: I0123 06:24:27.274587 4784 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3c51f97b-3800-4985-a967-3865d74606a0" Jan 23 06:24:27 crc kubenswrapper[4784]: I0123 06:24:27.400854 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 06:24:27 crc kubenswrapper[4784]: I0123 06:24:27.675394 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 06:24:27 crc kubenswrapper[4784]: I0123 06:24:27.886424 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 06:24:27 crc kubenswrapper[4784]: I0123 06:24:27.942211 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 06:24:27 crc kubenswrapper[4784]: I0123 06:24:27.968928 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 06:24:28 crc kubenswrapper[4784]: I0123 06:24:28.050998 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 
06:24:28 crc kubenswrapper[4784]: I0123 06:24:28.234795 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 06:24:28 crc kubenswrapper[4784]: I0123 06:24:28.787277 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 06:24:28 crc kubenswrapper[4784]: I0123 06:24:28.835599 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 06:24:28 crc kubenswrapper[4784]: I0123 06:24:28.920616 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 06:24:28 crc kubenswrapper[4784]: I0123 06:24:28.979640 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.042538 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.125203 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.143352 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.231497 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.243374 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.300303 4784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.436267 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.688989 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.710687 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.856276 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 06:24:29 crc kubenswrapper[4784]: I0123 06:24:29.982334 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.019630 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.060620 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.078102 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.237343 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.381640 4784 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.404978 4784 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.510696 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.613105 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.642476 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.705354 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.918050 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.931125 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 06:24:30 crc kubenswrapper[4784]: I0123 06:24:30.938514 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.176371 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.218563 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.364488 4784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.415078 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.434875 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.469298 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.598170 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.643445 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.698813 4784 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.758475 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.777150 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.804493 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 06:24:31 crc kubenswrapper[4784]: I0123 06:24:31.964479 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.040176 4784 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.073592 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.116303 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.256160 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.304362 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.465785 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.483961 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.489827 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.607536 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.628945 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.634305 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 
06:24:32.703548 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.782328 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.812524 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.833643 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.856103 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.908630 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.920088 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 06:24:32 crc kubenswrapper[4784]: I0123 06:24:32.978293 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.045219 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.046693 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.048698 4784 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.060135 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.085931 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.130493 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.273898 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.423374 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.538223 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.561837 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.597187 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.617544 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.619571 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.640914 
4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.693674 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 06:24:33 crc kubenswrapper[4784]: I0123 06:24:33.834858 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.076276 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.083926 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.094701 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.151370 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.152888 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.196386 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.206627 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.225647 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 06:24:34 crc 
kubenswrapper[4784]: I0123 06:24:34.240992 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.341640 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.422260 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.460133 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.461327 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.495264 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.521964 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.532346 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.601397 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.661812 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.734281 4784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.760368 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.816522 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.826165 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.930444 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.951616 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.986097 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.988532 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 06:24:34 crc kubenswrapper[4784]: I0123 06:24:34.995392 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.079227 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.101251 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 06:24:35 crc 
kubenswrapper[4784]: I0123 06:24:35.232071 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.354171 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.445086 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.460893 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.515050 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.571927 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.659582 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.697453 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.727358 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.788830 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 
06:24:35.828592 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.834164 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.887341 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.899086 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.932620 4784 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.934474 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.934453387 podStartE2EDuration="41.934453387s" podCreationTimestamp="2026-01-23 06:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:24:17.335025136 +0000 UTC m=+260.567533120" watchObservedRunningTime="2026-01-23 06:24:35.934453387 +0000 UTC m=+279.166961371" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.938815 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.938920 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.939907 4784 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.939931 4784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a24ae053-cd33-45bb-964d-8adb9b05239b" Jan 23 06:24:35 crc kubenswrapper[4784]: I0123 06:24:35.973290 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.973212494 podStartE2EDuration="18.973212494s" podCreationTimestamp="2026-01-23 06:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:24:35.969522359 +0000 UTC m=+279.202030363" watchObservedRunningTime="2026-01-23 06:24:35.973212494 +0000 UTC m=+279.205720518" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.006817 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.041018 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.136623 4784 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.231725 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.576618 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.587396 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 
06:24:36.624037 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.627442 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.666365 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.702514 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.705097 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.746680 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.751644 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.803093 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.860394 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.872490 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.945009 4784 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 06:24:36 crc kubenswrapper[4784]: I0123 06:24:36.960696 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.059543 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.070122 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.092121 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.258271 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.303356 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.370335 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.444344 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.497572 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.562652 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.575629 4784 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.607783 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.636824 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.795541 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.826323 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.885738 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.952888 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 06:24:37 crc kubenswrapper[4784]: I0123 06:24:37.960415 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.045380 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.093679 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.096944 4784 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.108243 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.134288 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.139869 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.277186 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.331354 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.389962 4784 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.520709 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.526427 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.603845 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.692966 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.778822 4784 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.846669 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.860501 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.869317 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.871718 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.881957 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.905493 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.937315 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 06:24:38 crc kubenswrapper[4784]: I0123 06:24:38.990027 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.006510 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.080824 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 
06:24:39.142835 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.155539 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.216357 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.271645 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.279518 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.481214 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.550473 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.643879 4784 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.680588 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.691119 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.720383 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 06:24:39 crc 
kubenswrapper[4784]: I0123 06:24:39.873349 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 06:24:39 crc kubenswrapper[4784]: I0123 06:24:39.909929 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.035437 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.043467 4784 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.044114 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32" gracePeriod=5 Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.076316 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.224211 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.260021 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.342421 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.377149 4784 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.397400 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.427198 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.436781 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.520697 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.531734 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.578444 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.584678 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.623841 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.678560 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.693093 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 
06:24:40.720432 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.742030 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.874263 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.883808 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.924352 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.929987 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.957089 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 06:24:40 crc kubenswrapper[4784]: I0123 06:24:40.984906 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.045033 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.167682 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.245347 4784 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.301005 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.307470 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.349801 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.389083 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.488082 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.498806 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.626404 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.656051 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.802476 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 06:24:41 crc kubenswrapper[4784]: I0123 06:24:41.939672 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.086235 4784 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.090614 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.227719 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.276561 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.362123 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.362440 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.504065 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.561325 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.569443 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.600102 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.724818 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 
23 06:24:42 crc kubenswrapper[4784]: I0123 06:24:42.837446 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 06:24:43 crc kubenswrapper[4784]: I0123 06:24:43.032216 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 06:24:44 crc kubenswrapper[4784]: I0123 06:24:44.288533 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 06:24:44 crc kubenswrapper[4784]: I0123 06:24:44.337041 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 06:24:44 crc kubenswrapper[4784]: I0123 06:24:44.816678 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.179684 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.179788 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.259519 4784 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.277569 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.277620 4784 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d9640a61-881a-4078-846f-60f69ce83ce1" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.285327 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.285460 4784 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d9640a61-881a-4078-846f-60f69ce83ce1" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.386827 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.386942 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387037 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387077 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387116 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387144 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387147 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387203 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387331 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387494 4784 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387516 4784 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387534 4784 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.387559 4784 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.404413 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.489825 4784 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.771896 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.772199 4784 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32" exitCode=137 Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.772281 4784 scope.go:117] "RemoveContainer" containerID="88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.772464 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.798289 4784 scope.go:117] "RemoveContainer" containerID="88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32" Jan 23 06:24:45 crc kubenswrapper[4784]: E0123 06:24:45.799448 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32\": container with ID starting with 88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32 not found: ID does not exist" containerID="88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32" Jan 23 06:24:45 crc kubenswrapper[4784]: I0123 06:24:45.799522 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32"} err="failed to get container status \"88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32\": rpc error: code = NotFound desc = could not find container \"88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32\": container with ID starting with 88f9396318b5957f184e9c3acc505974666769c69dcbf7e3ef30e1798928bb32 not found: ID does not exist" Jan 23 06:24:47 crc kubenswrapper[4784]: I0123 06:24:47.267292 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 06:24:57 crc kubenswrapper[4784]: I0123 06:24:57.060444 4784 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 06:24:57 crc kubenswrapper[4784]: I0123 06:24:57.881157 4784 generic.go:334] "Generic (PLEG): container finished" podID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerID="d4600f59bb969bb390239da2a85643bf146a362b38c67f7da24229e4ef52f2bf" 
exitCode=0 Jan 23 06:24:57 crc kubenswrapper[4784]: I0123 06:24:57.881240 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" event={"ID":"dc93f303-432c-4487-a225-f0af2fa5bd49","Type":"ContainerDied","Data":"d4600f59bb969bb390239da2a85643bf146a362b38c67f7da24229e4ef52f2bf"} Jan 23 06:24:57 crc kubenswrapper[4784]: I0123 06:24:57.882131 4784 scope.go:117] "RemoveContainer" containerID="d4600f59bb969bb390239da2a85643bf146a362b38c67f7da24229e4ef52f2bf" Jan 23 06:24:58 crc kubenswrapper[4784]: I0123 06:24:58.890167 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" event={"ID":"dc93f303-432c-4487-a225-f0af2fa5bd49","Type":"ContainerStarted","Data":"f82b81c982a4c6782b144641d93252d830ab34b267f9dbefa3dc687bca3bf511"} Jan 23 06:24:58 crc kubenswrapper[4784]: I0123 06:24:58.892048 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:24:58 crc kubenswrapper[4784]: I0123 06:24:58.895559 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:25:08 crc kubenswrapper[4784]: I0123 06:25:08.742974 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r4ds"] Jan 23 06:25:08 crc kubenswrapper[4784]: I0123 06:25:08.744137 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" podUID="85a2a44a-7e65-45f7-bd20-b895f5f09c73" containerName="controller-manager" containerID="cri-o://0d191427d4f026ed44e868b055175c7e1dca095073c31a67ca69eb3f2e1398db" gracePeriod=30 Jan 23 06:25:08 crc kubenswrapper[4784]: I0123 06:25:08.842086 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"] Jan 23 06:25:08 crc kubenswrapper[4784]: I0123 06:25:08.842363 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" podUID="709308c5-9977-4e05-98f0-b745c298db67" containerName="route-controller-manager" containerID="cri-o://59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5" gracePeriod=30 Jan 23 06:25:08 crc kubenswrapper[4784]: I0123 06:25:08.964825 4784 generic.go:334] "Generic (PLEG): container finished" podID="85a2a44a-7e65-45f7-bd20-b895f5f09c73" containerID="0d191427d4f026ed44e868b055175c7e1dca095073c31a67ca69eb3f2e1398db" exitCode=0 Jan 23 06:25:08 crc kubenswrapper[4784]: I0123 06:25:08.964884 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" event={"ID":"85a2a44a-7e65-45f7-bd20-b895f5f09c73","Type":"ContainerDied","Data":"0d191427d4f026ed44e868b055175c7e1dca095073c31a67ca69eb3f2e1398db"} Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.149769 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.201640 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.261620 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-proxy-ca-bundles\") pod \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.261773 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-config\") pod \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.261814 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a2a44a-7e65-45f7-bd20-b895f5f09c73-serving-cert\") pod \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.261897 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpjn9\" (UniqueName: \"kubernetes.io/projected/85a2a44a-7e65-45f7-bd20-b895f5f09c73-kube-api-access-hpjn9\") pod \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.261958 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-client-ca\") pod \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\" (UID: \"85a2a44a-7e65-45f7-bd20-b895f5f09c73\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.262921 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-client-ca" (OuterVolumeSpecName: "client-ca") pod "85a2a44a-7e65-45f7-bd20-b895f5f09c73" (UID: "85a2a44a-7e65-45f7-bd20-b895f5f09c73"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.263043 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "85a2a44a-7e65-45f7-bd20-b895f5f09c73" (UID: "85a2a44a-7e65-45f7-bd20-b895f5f09c73"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.263071 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-config" (OuterVolumeSpecName: "config") pod "85a2a44a-7e65-45f7-bd20-b895f5f09c73" (UID: "85a2a44a-7e65-45f7-bd20-b895f5f09c73"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.270050 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a2a44a-7e65-45f7-bd20-b895f5f09c73-kube-api-access-hpjn9" (OuterVolumeSpecName: "kube-api-access-hpjn9") pod "85a2a44a-7e65-45f7-bd20-b895f5f09c73" (UID: "85a2a44a-7e65-45f7-bd20-b895f5f09c73"). InnerVolumeSpecName "kube-api-access-hpjn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.276456 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a2a44a-7e65-45f7-bd20-b895f5f09c73-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85a2a44a-7e65-45f7-bd20-b895f5f09c73" (UID: "85a2a44a-7e65-45f7-bd20-b895f5f09c73"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.364049 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-config\") pod \"709308c5-9977-4e05-98f0-b745c298db67\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.364184 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-client-ca\") pod \"709308c5-9977-4e05-98f0-b745c298db67\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.364238 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsm4w\" (UniqueName: \"kubernetes.io/projected/709308c5-9977-4e05-98f0-b745c298db67-kube-api-access-bsm4w\") pod \"709308c5-9977-4e05-98f0-b745c298db67\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.364349 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/709308c5-9977-4e05-98f0-b745c298db67-serving-cert\") pod \"709308c5-9977-4e05-98f0-b745c298db67\" (UID: \"709308c5-9977-4e05-98f0-b745c298db67\") " Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.365650 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.365689 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85a2a44a-7e65-45f7-bd20-b895f5f09c73-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc 
kubenswrapper[4784]: I0123 06:25:09.365705 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpjn9\" (UniqueName: \"kubernetes.io/projected/85a2a44a-7e65-45f7-bd20-b895f5f09c73-kube-api-access-hpjn9\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.365731 4784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.365866 4784 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85a2a44a-7e65-45f7-bd20-b895f5f09c73-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.365943 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-client-ca" (OuterVolumeSpecName: "client-ca") pod "709308c5-9977-4e05-98f0-b745c298db67" (UID: "709308c5-9977-4e05-98f0-b745c298db67"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.367476 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-config" (OuterVolumeSpecName: "config") pod "709308c5-9977-4e05-98f0-b745c298db67" (UID: "709308c5-9977-4e05-98f0-b745c298db67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.370827 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709308c5-9977-4e05-98f0-b745c298db67-kube-api-access-bsm4w" (OuterVolumeSpecName: "kube-api-access-bsm4w") pod "709308c5-9977-4e05-98f0-b745c298db67" (UID: "709308c5-9977-4e05-98f0-b745c298db67"). 
InnerVolumeSpecName "kube-api-access-bsm4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.371039 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709308c5-9977-4e05-98f0-b745c298db67-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "709308c5-9977-4e05-98f0-b745c298db67" (UID: "709308c5-9977-4e05-98f0-b745c298db67"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.469204 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/709308c5-9977-4e05-98f0-b745c298db67-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.469261 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.469272 4784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/709308c5-9977-4e05-98f0-b745c298db67-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.469553 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsm4w\" (UniqueName: \"kubernetes.io/projected/709308c5-9977-4e05-98f0-b745c298db67-kube-api-access-bsm4w\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.973476 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" event={"ID":"85a2a44a-7e65-45f7-bd20-b895f5f09c73","Type":"ContainerDied","Data":"6967e30ae4b73d012b5c03791ac4d2b38eea8aafc673861ccf0ac4bc073a76a3"} Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.973541 4784 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4r4ds" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.973574 4784 scope.go:117] "RemoveContainer" containerID="0d191427d4f026ed44e868b055175c7e1dca095073c31a67ca69eb3f2e1398db" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.976054 4784 generic.go:334] "Generic (PLEG): container finished" podID="709308c5-9977-4e05-98f0-b745c298db67" containerID="59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5" exitCode=0 Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.976089 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" event={"ID":"709308c5-9977-4e05-98f0-b745c298db67","Type":"ContainerDied","Data":"59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5"} Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.976114 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" event={"ID":"709308c5-9977-4e05-98f0-b745c298db67","Type":"ContainerDied","Data":"1efaa9123218c444f1f183eafbced58c2d4ccfc2ba082c7d5c5448a3668e9683"} Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.976283 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7" Jan 23 06:25:09 crc kubenswrapper[4784]: I0123 06:25:09.994847 4784 scope.go:117] "RemoveContainer" containerID="59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.011321 4784 scope.go:117] "RemoveContainer" containerID="59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5" Jan 23 06:25:10 crc kubenswrapper[4784]: E0123 06:25:10.012165 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5\": container with ID starting with 59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5 not found: ID does not exist" containerID="59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.012249 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5"} err="failed to get container status \"59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5\": rpc error: code = NotFound desc = could not find container \"59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5\": container with ID starting with 59750aabf85c59d3109ec4c138f837f0786a9b9d1c487300eba237238dab03f5 not found: ID does not exist" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.015363 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r4ds"] Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.018786 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r4ds"] Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.026680 4784 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"] Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.029597 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h2hn7"] Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.249074 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-hgx97"] Jan 23 06:25:10 crc kubenswrapper[4784]: E0123 06:25:10.249945 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" containerName="installer" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.249966 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" containerName="installer" Jan 23 06:25:10 crc kubenswrapper[4784]: E0123 06:25:10.249983 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709308c5-9977-4e05-98f0-b745c298db67" containerName="route-controller-manager" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.249992 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="709308c5-9977-4e05-98f0-b745c298db67" containerName="route-controller-manager" Jan 23 06:25:10 crc kubenswrapper[4784]: E0123 06:25:10.250009 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.250018 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 06:25:10 crc kubenswrapper[4784]: E0123 06:25:10.250030 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a2a44a-7e65-45f7-bd20-b895f5f09c73" containerName="controller-manager" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.250039 4784 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="85a2a44a-7e65-45f7-bd20-b895f5f09c73" containerName="controller-manager" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.250197 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="709308c5-9977-4e05-98f0-b745c298db67" containerName="route-controller-manager" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.250210 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cba8aab-f67d-4ec7-99c1-6294655ebe56" containerName="installer" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.250223 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.250238 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a2a44a-7e65-45f7-bd20-b895f5f09c73" containerName="controller-manager" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.250910 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.254338 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.254446 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf"] Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.255611 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.259789 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.260026 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.260385 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.260420 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.260958 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.261021 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.261038 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.261126 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.261403 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.261848 4784 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.262127 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.275406 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf"] Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.279157 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.344148 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-hgx97"] Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.381539 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-client-ca\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.381599 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-serving-cert\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.381631 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prttn\" (UniqueName: \"kubernetes.io/projected/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-kube-api-access-prttn\") pod 
\"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.381668 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-config\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.381688 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-client-ca\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.381917 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-config\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.382115 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfdg\" (UniqueName: \"kubernetes.io/projected/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-kube-api-access-mtfdg\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 
06:25:10.382160 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-proxy-ca-bundles\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.382229 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-serving-cert\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.485845 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prttn\" (UniqueName: \"kubernetes.io/projected/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-kube-api-access-prttn\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.485935 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-config\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.485970 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-client-ca\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: 
\"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.486015 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-config\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.486063 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfdg\" (UniqueName: \"kubernetes.io/projected/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-kube-api-access-mtfdg\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.486089 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-proxy-ca-bundles\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.486115 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-serving-cert\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.486183 4784 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-client-ca\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.486398 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-serving-cert\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.487807 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-client-ca\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.488112 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-config\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.488193 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-config\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.488424 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-client-ca\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.488429 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-proxy-ca-bundles\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.491026 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-serving-cert\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.491112 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-serving-cert\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.506719 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prttn\" (UniqueName: \"kubernetes.io/projected/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-kube-api-access-prttn\") pod \"controller-manager-678789d75d-hgx97\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " 
pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.506799 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfdg\" (UniqueName: \"kubernetes.io/projected/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-kube-api-access-mtfdg\") pod \"route-controller-manager-79bd468846-pxjmf\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.584892 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.594308 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.793056 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-hgx97"] Jan 23 06:25:10 crc kubenswrapper[4784]: W0123 06:25:10.801033 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc39fe75_4af8_4b2c_9d6b_fd4d7e8c6473.slice/crio-22ccf0c7679e3eca09509b79ad7d86e4aaa9e6da6ec7a23eb0d2cc419231cc59 WatchSource:0}: Error finding container 22ccf0c7679e3eca09509b79ad7d86e4aaa9e6da6ec7a23eb0d2cc419231cc59: Status 404 returned error can't find the container with id 22ccf0c7679e3eca09509b79ad7d86e4aaa9e6da6ec7a23eb0d2cc419231cc59 Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.850661 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf"] Jan 23 06:25:10 crc kubenswrapper[4784]: W0123 06:25:10.853183 4784 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96f8ad08_a7e6_4ae5_ac33_e8ca51f1fbb8.slice/crio-5c027bb9894afe1573aa7b1f727f054523bf0f38a939ce61af9f09ad63f8c0f3 WatchSource:0}: Error finding container 5c027bb9894afe1573aa7b1f727f054523bf0f38a939ce61af9f09ad63f8c0f3: Status 404 returned error can't find the container with id 5c027bb9894afe1573aa7b1f727f054523bf0f38a939ce61af9f09ad63f8c0f3 Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.988777 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" event={"ID":"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473","Type":"ContainerStarted","Data":"22ccf0c7679e3eca09509b79ad7d86e4aaa9e6da6ec7a23eb0d2cc419231cc59"} Jan 23 06:25:10 crc kubenswrapper[4784]: I0123 06:25:10.989778 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" event={"ID":"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8","Type":"ContainerStarted","Data":"5c027bb9894afe1573aa7b1f727f054523bf0f38a939ce61af9f09ad63f8c0f3"} Jan 23 06:25:11 crc kubenswrapper[4784]: I0123 06:25:11.263710 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709308c5-9977-4e05-98f0-b745c298db67" path="/var/lib/kubelet/pods/709308c5-9977-4e05-98f0-b745c298db67/volumes" Jan 23 06:25:11 crc kubenswrapper[4784]: I0123 06:25:11.264522 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a2a44a-7e65-45f7-bd20-b895f5f09c73" path="/var/lib/kubelet/pods/85a2a44a-7e65-45f7-bd20-b895f5f09c73/volumes" Jan 23 06:25:11 crc kubenswrapper[4784]: I0123 06:25:11.891621 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 06:25:12 crc kubenswrapper[4784]: I0123 06:25:12.005230 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" 
event={"ID":"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473","Type":"ContainerStarted","Data":"0f65124436a2ca0ce0de232f98569165fa85ec30a65e9bd7a9652527b64191b9"} Jan 23 06:25:12 crc kubenswrapper[4784]: I0123 06:25:12.005523 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:12 crc kubenswrapper[4784]: I0123 06:25:12.007647 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" event={"ID":"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8","Type":"ContainerStarted","Data":"0f574f18719b3e483f1f7508143d5d1a9741f90bebcb50468b90c9268f80e47b"} Jan 23 06:25:12 crc kubenswrapper[4784]: I0123 06:25:12.007840 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:12 crc kubenswrapper[4784]: I0123 06:25:12.012555 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:12 crc kubenswrapper[4784]: I0123 06:25:12.020135 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:12 crc kubenswrapper[4784]: I0123 06:25:12.036397 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" podStartSLOduration=4.036368799 podStartE2EDuration="4.036368799s" podCreationTimestamp="2026-01-23 06:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:25:12.030830387 +0000 UTC m=+315.263338381" watchObservedRunningTime="2026-01-23 06:25:12.036368799 +0000 UTC m=+315.268876773" Jan 23 06:25:12 crc kubenswrapper[4784]: I0123 
06:25:12.089443 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" podStartSLOduration=4.089406663 podStartE2EDuration="4.089406663s" podCreationTimestamp="2026-01-23 06:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:25:12.075062394 +0000 UTC m=+315.307570368" watchObservedRunningTime="2026-01-23 06:25:12.089406663 +0000 UTC m=+315.321914637" Jan 23 06:25:20 crc kubenswrapper[4784]: I0123 06:25:20.887723 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-hgx97"] Jan 23 06:25:20 crc kubenswrapper[4784]: I0123 06:25:20.890131 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" podUID="cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" containerName="controller-manager" containerID="cri-o://0f65124436a2ca0ce0de232f98569165fa85ec30a65e9bd7a9652527b64191b9" gracePeriod=30 Jan 23 06:25:20 crc kubenswrapper[4784]: I0123 06:25:20.906265 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf"] Jan 23 06:25:20 crc kubenswrapper[4784]: I0123 06:25:20.906563 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" podUID="96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" containerName="route-controller-manager" containerID="cri-o://0f574f18719b3e483f1f7508143d5d1a9741f90bebcb50468b90c9268f80e47b" gracePeriod=30 Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.071122 4784 generic.go:334] "Generic (PLEG): container finished" podID="cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" containerID="0f65124436a2ca0ce0de232f98569165fa85ec30a65e9bd7a9652527b64191b9" 
exitCode=0 Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.071231 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" event={"ID":"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473","Type":"ContainerDied","Data":"0f65124436a2ca0ce0de232f98569165fa85ec30a65e9bd7a9652527b64191b9"} Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.073091 4784 generic.go:334] "Generic (PLEG): container finished" podID="96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" containerID="0f574f18719b3e483f1f7508143d5d1a9741f90bebcb50468b90c9268f80e47b" exitCode=0 Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.073146 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" event={"ID":"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8","Type":"ContainerDied","Data":"0f574f18719b3e483f1f7508143d5d1a9741f90bebcb50468b90c9268f80e47b"} Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.414803 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.539626 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558376 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-serving-cert\") pod \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558438 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prttn\" (UniqueName: \"kubernetes.io/projected/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-kube-api-access-prttn\") pod \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558459 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-proxy-ca-bundles\") pod \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558484 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-serving-cert\") pod \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558513 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-config\") pod \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558537 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-config\") pod \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558556 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtfdg\" (UniqueName: \"kubernetes.io/projected/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-kube-api-access-mtfdg\") pod \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558591 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-client-ca\") pod \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\" (UID: \"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.558611 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-client-ca\") pod \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\" (UID: \"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8\") " Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.559564 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-client-ca" (OuterVolumeSpecName: "client-ca") pod "96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" (UID: "96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.560053 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-config" (OuterVolumeSpecName: "config") pod "96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" (UID: "96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.561412 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" (UID: "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.561506 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-config" (OuterVolumeSpecName: "config") pod "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" (UID: "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.561437 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-client-ca" (OuterVolumeSpecName: "client-ca") pod "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" (UID: "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.568739 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" (UID: "96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.568735 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-kube-api-access-prttn" (OuterVolumeSpecName: "kube-api-access-prttn") pod "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" (UID: "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473"). InnerVolumeSpecName "kube-api-access-prttn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.568865 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-kube-api-access-mtfdg" (OuterVolumeSpecName: "kube-api-access-mtfdg") pod "96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" (UID: "96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8"). InnerVolumeSpecName "kube-api-access-mtfdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.569060 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" (UID: "cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.659928 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.659976 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prttn\" (UniqueName: \"kubernetes.io/projected/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-kube-api-access-prttn\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.659992 4784 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.660005 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.660019 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.660030 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.660042 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtfdg\" (UniqueName: \"kubernetes.io/projected/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-kube-api-access-mtfdg\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.660053 4784 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:21 crc kubenswrapper[4784]: I0123 06:25:21.660065 4784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.083345 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" event={"ID":"cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473","Type":"ContainerDied","Data":"22ccf0c7679e3eca09509b79ad7d86e4aaa9e6da6ec7a23eb0d2cc419231cc59"} Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.084442 4784 scope.go:117] "RemoveContainer" containerID="0f65124436a2ca0ce0de232f98569165fa85ec30a65e9bd7a9652527b64191b9" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.083439 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-hgx97" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.085852 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" event={"ID":"96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8","Type":"ContainerDied","Data":"5c027bb9894afe1573aa7b1f727f054523bf0f38a939ce61af9f09ad63f8c0f3"} Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.085939 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.120295 4784 scope.go:117] "RemoveContainer" containerID="0f574f18719b3e483f1f7508143d5d1a9741f90bebcb50468b90c9268f80e47b" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.138226 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-hgx97"] Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.144622 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q"] Jan 23 06:25:22 crc kubenswrapper[4784]: E0123 06:25:22.145050 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" containerName="route-controller-manager" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.145080 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" containerName="route-controller-manager" Jan 23 06:25:22 crc kubenswrapper[4784]: E0123 06:25:22.145091 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" containerName="controller-manager" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.145101 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" containerName="controller-manager" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.145224 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" containerName="controller-manager" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.145247 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" containerName="route-controller-manager" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.145847 4784 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.148177 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.148508 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.148669 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.149676 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.149876 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.150113 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.154925 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-hgx97"] Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.159947 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-s84wx"] Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.160855 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.164470 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q"] Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.165260 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.165531 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.165725 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.166251 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.166428 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.167700 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.171232 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf"] Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.177355 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.178905 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-pxjmf"] 
Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.181784 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-s84wx"] Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.270207 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-client-ca\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.270839 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-proxy-ca-bundles\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.271212 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-config\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.271399 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52e9e151-5768-4b34-9379-d9f15e2dbebc-serving-cert\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: 
I0123 06:25:22.271482 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-config\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.271515 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-client-ca\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.271574 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fbz\" (UniqueName: \"kubernetes.io/projected/1e62733e-611d-412c-aa0d-2b3b040fa621-kube-api-access-k6fbz\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.271630 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e62733e-611d-412c-aa0d-2b3b040fa621-serving-cert\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.271681 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnhv\" (UniqueName: 
\"kubernetes.io/projected/52e9e151-5768-4b34-9379-d9f15e2dbebc-kube-api-access-hxnhv\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373352 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-client-ca\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373436 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fbz\" (UniqueName: \"kubernetes.io/projected/1e62733e-611d-412c-aa0d-2b3b040fa621-kube-api-access-k6fbz\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373467 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e62733e-611d-412c-aa0d-2b3b040fa621-serving-cert\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373508 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxnhv\" (UniqueName: \"kubernetes.io/projected/52e9e151-5768-4b34-9379-d9f15e2dbebc-kube-api-access-hxnhv\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " 
pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373549 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-client-ca\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373590 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-proxy-ca-bundles\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373642 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-config\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373676 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52e9e151-5768-4b34-9379-d9f15e2dbebc-serving-cert\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.373708 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-config\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.375261 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-client-ca\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.375321 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-proxy-ca-bundles\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.375648 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-config\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.375747 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-config\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.376726 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-client-ca\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.380610 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e62733e-611d-412c-aa0d-2b3b040fa621-serving-cert\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.385443 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52e9e151-5768-4b34-9379-d9f15e2dbebc-serving-cert\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.393316 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxnhv\" (UniqueName: \"kubernetes.io/projected/52e9e151-5768-4b34-9379-d9f15e2dbebc-kube-api-access-hxnhv\") pod \"route-controller-manager-7785b8bc59-xq89q\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.395940 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fbz\" (UniqueName: \"kubernetes.io/projected/1e62733e-611d-412c-aa0d-2b3b040fa621-kube-api-access-k6fbz\") pod \"controller-manager-7775888cf-s84wx\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 
23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.470700 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.490580 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.692218 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkrz6"] Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.692607 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zkrz6" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="registry-server" containerID="cri-o://9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff" gracePeriod=2 Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.728788 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-s84wx"] Jan 23 06:25:22 crc kubenswrapper[4784]: I0123 06:25:22.776900 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q"] Jan 23 06:25:22 crc kubenswrapper[4784]: W0123 06:25:22.780522 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52e9e151_5768_4b34_9379_d9f15e2dbebc.slice/crio-be16fcd17a0a386038f65d40a8681ab879c11ce289c1ca92d868c61740af944d WatchSource:0}: Error finding container be16fcd17a0a386038f65d40a8681ab879c11ce289c1ca92d868c61740af944d: Status 404 returned error can't find the container with id be16fcd17a0a386038f65d40a8681ab879c11ce289c1ca92d868c61740af944d Jan 23 06:25:23 crc kubenswrapper[4784]: E0123 06:25:23.006550 4784 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff is running failed: container process not found" containerID="9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:25:23 crc kubenswrapper[4784]: E0123 06:25:23.007207 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff is running failed: container process not found" containerID="9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:25:23 crc kubenswrapper[4784]: E0123 06:25:23.007474 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff is running failed: container process not found" containerID="9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:25:23 crc kubenswrapper[4784]: E0123 06:25:23.007509 4784 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-zkrz6" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="registry-server" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.094004 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.106307 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" event={"ID":"52e9e151-5768-4b34-9379-d9f15e2dbebc","Type":"ContainerStarted","Data":"d8e1d2927f5074cbfaba0b5f52c49134e5f561605e3a4ddc13a7fc7d7ec0f6bc"} Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.106394 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" event={"ID":"52e9e151-5768-4b34-9379-d9f15e2dbebc","Type":"ContainerStarted","Data":"be16fcd17a0a386038f65d40a8681ab879c11ce289c1ca92d868c61740af944d"} Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.107524 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.108929 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" event={"ID":"1e62733e-611d-412c-aa0d-2b3b040fa621","Type":"ContainerStarted","Data":"d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd"} Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.109955 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" event={"ID":"1e62733e-611d-412c-aa0d-2b3b040fa621","Type":"ContainerStarted","Data":"bec07c2840f860065fe3cd7214520ed23b47cd85463db9bf990a34cc7c536292"} Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.110049 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.112549 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="de559431-551a-4057-96ec-37537d6eddc8" containerID="9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff" exitCode=0 Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.112583 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkrz6" event={"ID":"de559431-551a-4057-96ec-37537d6eddc8","Type":"ContainerDied","Data":"9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff"} Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.112601 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkrz6" event={"ID":"de559431-551a-4057-96ec-37537d6eddc8","Type":"ContainerDied","Data":"323eeb41926d8e8041f441e15c3b775323ccbcbbb6e7e1b4e14149af617d6e5f"} Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.112621 4784 scope.go:117] "RemoveContainer" containerID="9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.112765 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zkrz6" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.113877 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.139392 4784 scope.go:117] "RemoveContainer" containerID="f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.160856 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" podStartSLOduration=1.160832543 podStartE2EDuration="1.160832543s" podCreationTimestamp="2026-01-23 06:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:25:23.15762373 +0000 UTC m=+326.390131714" watchObservedRunningTime="2026-01-23 06:25:23.160832543 +0000 UTC m=+326.393340517" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.165979 4784 scope.go:117] "RemoveContainer" containerID="5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.178365 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" podStartSLOduration=1.178336102 podStartE2EDuration="1.178336102s" podCreationTimestamp="2026-01-23 06:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:25:23.176006852 +0000 UTC m=+326.408514836" watchObservedRunningTime="2026-01-23 06:25:23.178336102 +0000 UTC m=+326.410844076" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.190374 4784 scope.go:117] "RemoveContainer" 
containerID="9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff" Jan 23 06:25:23 crc kubenswrapper[4784]: E0123 06:25:23.190837 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff\": container with ID starting with 9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff not found: ID does not exist" containerID="9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.190878 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff"} err="failed to get container status \"9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff\": rpc error: code = NotFound desc = could not find container \"9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff\": container with ID starting with 9c742b40456968dceab66815cd219a5e61510c647b523c0982c60a8d8e166eff not found: ID does not exist" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.190908 4784 scope.go:117] "RemoveContainer" containerID="f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c" Jan 23 06:25:23 crc kubenswrapper[4784]: E0123 06:25:23.192030 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c\": container with ID starting with f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c not found: ID does not exist" containerID="f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.192164 4784 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c"} err="failed to get container status \"f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c\": rpc error: code = NotFound desc = could not find container \"f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c\": container with ID starting with f22f3a8e00397b91b5f8a836ecaa1d0e9d55ba1a3464fc3d9c7323d058e9851c not found: ID does not exist" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.192270 4784 scope.go:117] "RemoveContainer" containerID="5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa" Jan 23 06:25:23 crc kubenswrapper[4784]: E0123 06:25:23.192641 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa\": container with ID starting with 5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa not found: ID does not exist" containerID="5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.192728 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa"} err="failed to get container status \"5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa\": rpc error: code = NotFound desc = could not find container \"5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa\": container with ID starting with 5f572091df69849c5b15f306dbcda542ec33f9d4e009db067a4bd325c36ceefa not found: ID does not exist" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.264412 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8" path="/var/lib/kubelet/pods/96f8ad08-a7e6-4ae5-ac33-e8ca51f1fbb8/volumes" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 
06:25:23.265403 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473" path="/var/lib/kubelet/pods/cc39fe75-4af8-4b2c-9d6b-fd4d7e8c6473/volumes" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.286792 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr877\" (UniqueName: \"kubernetes.io/projected/de559431-551a-4057-96ec-37537d6eddc8-kube-api-access-nr877\") pod \"de559431-551a-4057-96ec-37537d6eddc8\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.286873 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-utilities\") pod \"de559431-551a-4057-96ec-37537d6eddc8\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.287084 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-catalog-content\") pod \"de559431-551a-4057-96ec-37537d6eddc8\" (UID: \"de559431-551a-4057-96ec-37537d6eddc8\") " Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.288466 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-utilities" (OuterVolumeSpecName: "utilities") pod "de559431-551a-4057-96ec-37537d6eddc8" (UID: "de559431-551a-4057-96ec-37537d6eddc8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.296086 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de559431-551a-4057-96ec-37537d6eddc8-kube-api-access-nr877" (OuterVolumeSpecName: "kube-api-access-nr877") pod "de559431-551a-4057-96ec-37537d6eddc8" (UID: "de559431-551a-4057-96ec-37537d6eddc8"). InnerVolumeSpecName "kube-api-access-nr877". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.388929 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr877\" (UniqueName: \"kubernetes.io/projected/de559431-551a-4057-96ec-37537d6eddc8-kube-api-access-nr877\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.388978 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.401820 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de559431-551a-4057-96ec-37537d6eddc8" (UID: "de559431-551a-4057-96ec-37537d6eddc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.441384 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkrz6"] Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.444316 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zkrz6"] Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.490122 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de559431-551a-4057-96ec-37537d6eddc8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:23 crc kubenswrapper[4784]: I0123 06:25:23.616164 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:25 crc kubenswrapper[4784]: I0123 06:25:25.262238 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de559431-551a-4057-96ec-37537d6eddc8" path="/var/lib/kubelet/pods/de559431-551a-4057-96ec-37537d6eddc8/volumes" Jan 23 06:25:28 crc kubenswrapper[4784]: I0123 06:25:28.710347 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q"] Jan 23 06:25:28 crc kubenswrapper[4784]: I0123 06:25:28.711176 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" podUID="52e9e151-5768-4b34-9379-d9f15e2dbebc" containerName="route-controller-manager" containerID="cri-o://d8e1d2927f5074cbfaba0b5f52c49134e5f561605e3a4ddc13a7fc7d7ec0f6bc" gracePeriod=30 Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.159732 4784 generic.go:334] "Generic (PLEG): container finished" podID="52e9e151-5768-4b34-9379-d9f15e2dbebc" containerID="d8e1d2927f5074cbfaba0b5f52c49134e5f561605e3a4ddc13a7fc7d7ec0f6bc" 
exitCode=0 Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.159858 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" event={"ID":"52e9e151-5768-4b34-9379-d9f15e2dbebc","Type":"ContainerDied","Data":"d8e1d2927f5074cbfaba0b5f52c49134e5f561605e3a4ddc13a7fc7d7ec0f6bc"} Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.160211 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" event={"ID":"52e9e151-5768-4b34-9379-d9f15e2dbebc","Type":"ContainerDied","Data":"be16fcd17a0a386038f65d40a8681ab879c11ce289c1ca92d868c61740af944d"} Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.160236 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be16fcd17a0a386038f65d40a8681ab879c11ce289c1ca92d868c61740af944d" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.186276 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.370588 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-config\") pod \"52e9e151-5768-4b34-9379-d9f15e2dbebc\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.370697 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxnhv\" (UniqueName: \"kubernetes.io/projected/52e9e151-5768-4b34-9379-d9f15e2dbebc-kube-api-access-hxnhv\") pod \"52e9e151-5768-4b34-9379-d9f15e2dbebc\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.370725 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-client-ca\") pod \"52e9e151-5768-4b34-9379-d9f15e2dbebc\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.370800 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52e9e151-5768-4b34-9379-d9f15e2dbebc-serving-cert\") pod \"52e9e151-5768-4b34-9379-d9f15e2dbebc\" (UID: \"52e9e151-5768-4b34-9379-d9f15e2dbebc\") " Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.374498 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-client-ca" (OuterVolumeSpecName: "client-ca") pod "52e9e151-5768-4b34-9379-d9f15e2dbebc" (UID: "52e9e151-5768-4b34-9379-d9f15e2dbebc"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.375169 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-config" (OuterVolumeSpecName: "config") pod "52e9e151-5768-4b34-9379-d9f15e2dbebc" (UID: "52e9e151-5768-4b34-9379-d9f15e2dbebc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.378634 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52e9e151-5768-4b34-9379-d9f15e2dbebc-kube-api-access-hxnhv" (OuterVolumeSpecName: "kube-api-access-hxnhv") pod "52e9e151-5768-4b34-9379-d9f15e2dbebc" (UID: "52e9e151-5768-4b34-9379-d9f15e2dbebc"). InnerVolumeSpecName "kube-api-access-hxnhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.378874 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52e9e151-5768-4b34-9379-d9f15e2dbebc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "52e9e151-5768-4b34-9379-d9f15e2dbebc" (UID: "52e9e151-5768-4b34-9379-d9f15e2dbebc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.472653 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.472711 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxnhv\" (UniqueName: \"kubernetes.io/projected/52e9e151-5768-4b34-9379-d9f15e2dbebc-kube-api-access-hxnhv\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.472739 4784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/52e9e151-5768-4b34-9379-d9f15e2dbebc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:29 crc kubenswrapper[4784]: I0123 06:25:29.472785 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52e9e151-5768-4b34-9379-d9f15e2dbebc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.168882 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.206353 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q"] Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.209246 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xq89q"] Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.251868 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg"] Jan 23 06:25:30 crc kubenswrapper[4784]: E0123 06:25:30.252221 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="extract-utilities" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.252241 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="extract-utilities" Jan 23 06:25:30 crc kubenswrapper[4784]: E0123 06:25:30.252261 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="extract-content" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.252268 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="extract-content" Jan 23 06:25:30 crc kubenswrapper[4784]: E0123 06:25:30.252275 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e9e151-5768-4b34-9379-d9f15e2dbebc" containerName="route-controller-manager" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.252283 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e9e151-5768-4b34-9379-d9f15e2dbebc" containerName="route-controller-manager" Jan 23 06:25:30 crc kubenswrapper[4784]: E0123 06:25:30.252298 4784 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="registry-server" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.252307 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="registry-server" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.252432 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="52e9e151-5768-4b34-9379-d9f15e2dbebc" containerName="route-controller-manager" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.252443 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="de559431-551a-4057-96ec-37537d6eddc8" containerName="registry-server" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.252969 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.255980 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.256465 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.256523 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.256657 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.257210 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.257220 4784 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.268696 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg"] Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.386584 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b77d8883-4825-46fe-9790-8dbbb7da611b-serving-cert\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.386657 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b77d8883-4825-46fe-9790-8dbbb7da611b-config\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.386698 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tnf\" (UniqueName: \"kubernetes.io/projected/b77d8883-4825-46fe-9790-8dbbb7da611b-kube-api-access-22tnf\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.386954 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b77d8883-4825-46fe-9790-8dbbb7da611b-client-ca\") pod \"route-controller-manager-5c588f9d76-6nkxg\" 
(UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.489178 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22tnf\" (UniqueName: \"kubernetes.io/projected/b77d8883-4825-46fe-9790-8dbbb7da611b-kube-api-access-22tnf\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.489273 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b77d8883-4825-46fe-9790-8dbbb7da611b-client-ca\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.489392 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b77d8883-4825-46fe-9790-8dbbb7da611b-serving-cert\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.489439 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b77d8883-4825-46fe-9790-8dbbb7da611b-config\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.491046 4784 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b77d8883-4825-46fe-9790-8dbbb7da611b-client-ca\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.491161 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b77d8883-4825-46fe-9790-8dbbb7da611b-config\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.496961 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b77d8883-4825-46fe-9790-8dbbb7da611b-serving-cert\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.508080 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22tnf\" (UniqueName: \"kubernetes.io/projected/b77d8883-4825-46fe-9790-8dbbb7da611b-kube-api-access-22tnf\") pod \"route-controller-manager-5c588f9d76-6nkxg\" (UID: \"b77d8883-4825-46fe-9790-8dbbb7da611b\") " pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:30 crc kubenswrapper[4784]: I0123 06:25:30.578600 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:31 crc kubenswrapper[4784]: I0123 06:25:31.017254 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg"] Jan 23 06:25:31 crc kubenswrapper[4784]: I0123 06:25:31.182280 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" event={"ID":"b77d8883-4825-46fe-9790-8dbbb7da611b","Type":"ContainerStarted","Data":"36b378c476faaacbe503e1be576b6e0804d4c222cd69d76ff78a4d732e85cb36"} Jan 23 06:25:31 crc kubenswrapper[4784]: I0123 06:25:31.262882 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52e9e151-5768-4b34-9379-d9f15e2dbebc" path="/var/lib/kubelet/pods/52e9e151-5768-4b34-9379-d9f15e2dbebc/volumes" Jan 23 06:25:32 crc kubenswrapper[4784]: I0123 06:25:32.193511 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" event={"ID":"b77d8883-4825-46fe-9790-8dbbb7da611b","Type":"ContainerStarted","Data":"20df37d02d6d7e60e705e09bf93d9e879c5eb8124b035d325dbb34c4549fb939"} Jan 23 06:25:32 crc kubenswrapper[4784]: I0123 06:25:32.194159 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:32 crc kubenswrapper[4784]: I0123 06:25:32.199954 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" Jan 23 06:25:32 crc kubenswrapper[4784]: I0123 06:25:32.218484 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c588f9d76-6nkxg" podStartSLOduration=4.218452135 podStartE2EDuration="4.218452135s" 
podCreationTimestamp="2026-01-23 06:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:25:32.212270096 +0000 UTC m=+335.444778070" watchObservedRunningTime="2026-01-23 06:25:32.218452135 +0000 UTC m=+335.450960119" Jan 23 06:25:53 crc kubenswrapper[4784]: I0123 06:25:53.603243 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:25:53 crc kubenswrapper[4784]: I0123 06:25:53.603995 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:26:08 crc kubenswrapper[4784]: I0123 06:26:08.757713 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-s84wx"] Jan 23 06:26:08 crc kubenswrapper[4784]: I0123 06:26:08.758532 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" podUID="1e62733e-611d-412c-aa0d-2b3b040fa621" containerName="controller-manager" containerID="cri-o://d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd" gracePeriod=30 Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.194814 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.307977 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-proxy-ca-bundles\") pod \"1e62733e-611d-412c-aa0d-2b3b040fa621\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.308066 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-config\") pod \"1e62733e-611d-412c-aa0d-2b3b040fa621\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.308102 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-client-ca\") pod \"1e62733e-611d-412c-aa0d-2b3b040fa621\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.308188 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e62733e-611d-412c-aa0d-2b3b040fa621-serving-cert\") pod \"1e62733e-611d-412c-aa0d-2b3b040fa621\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.308225 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6fbz\" (UniqueName: \"kubernetes.io/projected/1e62733e-611d-412c-aa0d-2b3b040fa621-kube-api-access-k6fbz\") pod \"1e62733e-611d-412c-aa0d-2b3b040fa621\" (UID: \"1e62733e-611d-412c-aa0d-2b3b040fa621\") " Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.309022 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1e62733e-611d-412c-aa0d-2b3b040fa621" (UID: "1e62733e-611d-412c-aa0d-2b3b040fa621"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.309207 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-config" (OuterVolumeSpecName: "config") pod "1e62733e-611d-412c-aa0d-2b3b040fa621" (UID: "1e62733e-611d-412c-aa0d-2b3b040fa621"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.309669 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-client-ca" (OuterVolumeSpecName: "client-ca") pod "1e62733e-611d-412c-aa0d-2b3b040fa621" (UID: "1e62733e-611d-412c-aa0d-2b3b040fa621"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.315949 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e62733e-611d-412c-aa0d-2b3b040fa621-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1e62733e-611d-412c-aa0d-2b3b040fa621" (UID: "1e62733e-611d-412c-aa0d-2b3b040fa621"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.316340 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e62733e-611d-412c-aa0d-2b3b040fa621-kube-api-access-k6fbz" (OuterVolumeSpecName: "kube-api-access-k6fbz") pod "1e62733e-611d-412c-aa0d-2b3b040fa621" (UID: "1e62733e-611d-412c-aa0d-2b3b040fa621"). InnerVolumeSpecName "kube-api-access-k6fbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.411028 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.411089 4784 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.411105 4784 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e62733e-611d-412c-aa0d-2b3b040fa621-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.411118 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6fbz\" (UniqueName: \"kubernetes.io/projected/1e62733e-611d-412c-aa0d-2b3b040fa621-kube-api-access-k6fbz\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.411134 4784 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e62733e-611d-412c-aa0d-2b3b040fa621-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.452845 4784 generic.go:334] "Generic (PLEG): container finished" podID="1e62733e-611d-412c-aa0d-2b3b040fa621" containerID="d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd" exitCode=0 Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.452916 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" event={"ID":"1e62733e-611d-412c-aa0d-2b3b040fa621","Type":"ContainerDied","Data":"d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd"} Jan 23 06:26:09 crc 
kubenswrapper[4784]: I0123 06:26:09.452963 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" event={"ID":"1e62733e-611d-412c-aa0d-2b3b040fa621","Type":"ContainerDied","Data":"bec07c2840f860065fe3cd7214520ed23b47cd85463db9bf990a34cc7c536292"} Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.452997 4784 scope.go:117] "RemoveContainer" containerID="d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.453183 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-s84wx" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.471515 4784 scope.go:117] "RemoveContainer" containerID="d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd" Jan 23 06:26:09 crc kubenswrapper[4784]: E0123 06:26:09.472162 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd\": container with ID starting with d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd not found: ID does not exist" containerID="d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.472202 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd"} err="failed to get container status \"d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd\": rpc error: code = NotFound desc = could not find container \"d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd\": container with ID starting with d3354889d0891bfb7586a9b9cb48fc7c5567acc754c358f479b678e390d0b4cd not found: ID does not exist" Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 
06:26:09.494301 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-s84wx"] Jan 23 06:26:09 crc kubenswrapper[4784]: I0123 06:26:09.498105 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-s84wx"] Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.283376 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d5d5977cb-wwrnl"] Jan 23 06:26:10 crc kubenswrapper[4784]: E0123 06:26:10.283778 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e62733e-611d-412c-aa0d-2b3b040fa621" containerName="controller-manager" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.283798 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e62733e-611d-412c-aa0d-2b3b040fa621" containerName="controller-manager" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.283950 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e62733e-611d-412c-aa0d-2b3b040fa621" containerName="controller-manager" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.284612 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.287731 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.289566 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.289878 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.290065 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.290320 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.291238 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.293034 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d5d5977cb-wwrnl"] Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.298427 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.426692 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqjp\" (UniqueName: \"kubernetes.io/projected/4cf2e2b0-09d4-411b-9a83-1b3b409368be-kube-api-access-7pqjp\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " 
pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.427159 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-client-ca\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.427358 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cf2e2b0-09d4-411b-9a83-1b3b409368be-serving-cert\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.427467 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-proxy-ca-bundles\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.427610 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-config\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.528968 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-config\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.529087 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqjp\" (UniqueName: \"kubernetes.io/projected/4cf2e2b0-09d4-411b-9a83-1b3b409368be-kube-api-access-7pqjp\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.529167 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-client-ca\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.529213 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cf2e2b0-09d4-411b-9a83-1b3b409368be-serving-cert\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.529240 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-proxy-ca-bundles\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.530338 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-client-ca\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.530493 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-config\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.531148 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4cf2e2b0-09d4-411b-9a83-1b3b409368be-proxy-ca-bundles\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.535947 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cf2e2b0-09d4-411b-9a83-1b3b409368be-serving-cert\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc kubenswrapper[4784]: I0123 06:26:10.551686 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqjp\" (UniqueName: \"kubernetes.io/projected/4cf2e2b0-09d4-411b-9a83-1b3b409368be-kube-api-access-7pqjp\") pod \"controller-manager-d5d5977cb-wwrnl\" (UID: \"4cf2e2b0-09d4-411b-9a83-1b3b409368be\") " pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:10 crc 
kubenswrapper[4784]: I0123 06:26:10.643729 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:11 crc kubenswrapper[4784]: I0123 06:26:11.097504 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d5d5977cb-wwrnl"] Jan 23 06:26:11 crc kubenswrapper[4784]: I0123 06:26:11.261515 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e62733e-611d-412c-aa0d-2b3b040fa621" path="/var/lib/kubelet/pods/1e62733e-611d-412c-aa0d-2b3b040fa621/volumes" Jan 23 06:26:11 crc kubenswrapper[4784]: I0123 06:26:11.471783 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" event={"ID":"4cf2e2b0-09d4-411b-9a83-1b3b409368be","Type":"ContainerStarted","Data":"8625052bb0955bc87f45ad4e831f7c7578bb8a47477a61db8467c2de14abb02a"} Jan 23 06:26:11 crc kubenswrapper[4784]: I0123 06:26:11.471867 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" event={"ID":"4cf2e2b0-09d4-411b-9a83-1b3b409368be","Type":"ContainerStarted","Data":"2744f72366d3390057d5967388c9e36b256a5b73e102330caa7700af0c9efed8"} Jan 23 06:26:11 crc kubenswrapper[4784]: I0123 06:26:11.472088 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:11 crc kubenswrapper[4784]: I0123 06:26:11.488363 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 06:26:11 crc kubenswrapper[4784]: I0123 06:26:11.500515 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podStartSLOduration=3.500489827 podStartE2EDuration="3.500489827s" 
podCreationTimestamp="2026-01-23 06:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:26:11.495944275 +0000 UTC m=+374.728452239" watchObservedRunningTime="2026-01-23 06:26:11.500489827 +0000 UTC m=+374.732997801" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.092612 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vfdf"] Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.094669 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5vfdf" podUID="64c4e525-6765-4230-b129-3364819dfa47" containerName="registry-server" containerID="cri-o://202ab7f6b11fdbf49b5af60ff308819d46c01ff7a19b75b4deaf936da1ac6205" gracePeriod=30 Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.102527 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h2bnm"] Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.102944 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h2bnm" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerName="registry-server" containerID="cri-o://7e468db067e2bf1c6e3a73d40165b87bf320d022c181546e5cfd600002d12fcf" gracePeriod=30 Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.111851 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n9rnn"] Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.112188 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" containerID="cri-o://f82b81c982a4c6782b144641d93252d830ab34b267f9dbefa3dc687bca3bf511" gracePeriod=30 Jan 23 06:26:21 
crc kubenswrapper[4784]: I0123 06:26:21.116734 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rk5j"] Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.117120 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2rk5j" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="registry-server" containerID="cri-o://a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0" gracePeriod=30 Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.145829 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-srwlm"] Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.146940 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.151841 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vbpl"] Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.153152 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6vbpl" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="registry-server" containerID="cri-o://2ea00f16a1ba13f077f3571aaf1d472b5169b22b4cd6bb4c5b9f6b8bbbf609e0" gracePeriod=30 Jan 23 06:26:21 crc kubenswrapper[4784]: E0123 06:26:21.154907 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:26:21 crc kubenswrapper[4784]: E0123 06:26:21.165556 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0 is running failed: container process not found" containerID="a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:26:21 crc kubenswrapper[4784]: E0123 06:26:21.167873 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0 is running failed: container process not found" containerID="a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:26:21 crc kubenswrapper[4784]: E0123 06:26:21.168105 4784 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-2rk5j" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="registry-server" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.170230 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-srwlm"] Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.192937 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4180fe07-d016-4462-8f55-9da994cc6827-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.193033 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mmzgx\" (UniqueName: \"kubernetes.io/projected/4180fe07-d016-4462-8f55-9da994cc6827-kube-api-access-mmzgx\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.193066 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4180fe07-d016-4462-8f55-9da994cc6827-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.295605 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmzgx\" (UniqueName: \"kubernetes.io/projected/4180fe07-d016-4462-8f55-9da994cc6827-kube-api-access-mmzgx\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.295704 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4180fe07-d016-4462-8f55-9da994cc6827-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.295810 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4180fe07-d016-4462-8f55-9da994cc6827-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.303310 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4180fe07-d016-4462-8f55-9da994cc6827-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.306972 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4180fe07-d016-4462-8f55-9da994cc6827-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.318818 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmzgx\" (UniqueName: \"kubernetes.io/projected/4180fe07-d016-4462-8f55-9da994cc6827-kube-api-access-mmzgx\") pod \"marketplace-operator-79b997595-srwlm\" (UID: \"4180fe07-d016-4462-8f55-9da994cc6827\") " pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.485910 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.549592 4784 generic.go:334] "Generic (PLEG): container finished" podID="f2daccf7-5481-4092-a720-045f3e033b62" containerID="2ea00f16a1ba13f077f3571aaf1d472b5169b22b4cd6bb4c5b9f6b8bbbf609e0" exitCode=0 Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.549955 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vbpl" event={"ID":"f2daccf7-5481-4092-a720-045f3e033b62","Type":"ContainerDied","Data":"2ea00f16a1ba13f077f3571aaf1d472b5169b22b4cd6bb4c5b9f6b8bbbf609e0"} Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.554416 4784 generic.go:334] "Generic (PLEG): container finished" podID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerID="a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0" exitCode=0 Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.554525 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rk5j" event={"ID":"9a96ea8e-c45f-4799-886a-aef90c8b8e1a","Type":"ContainerDied","Data":"a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0"} Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.574575 4784 generic.go:334] "Generic (PLEG): container finished" podID="64c4e525-6765-4230-b129-3364819dfa47" containerID="202ab7f6b11fdbf49b5af60ff308819d46c01ff7a19b75b4deaf936da1ac6205" exitCode=0 Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.574668 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vfdf" event={"ID":"64c4e525-6765-4230-b129-3364819dfa47","Type":"ContainerDied","Data":"202ab7f6b11fdbf49b5af60ff308819d46c01ff7a19b75b4deaf936da1ac6205"} Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.576766 4784 generic.go:334] "Generic (PLEG): container finished" podID="dc93f303-432c-4487-a225-f0af2fa5bd49" 
containerID="f82b81c982a4c6782b144641d93252d830ab34b267f9dbefa3dc687bca3bf511" exitCode=0 Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.576774 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" event={"ID":"dc93f303-432c-4487-a225-f0af2fa5bd49","Type":"ContainerDied","Data":"f82b81c982a4c6782b144641d93252d830ab34b267f9dbefa3dc687bca3bf511"} Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.576877 4784 scope.go:117] "RemoveContainer" containerID="d4600f59bb969bb390239da2a85643bf146a362b38c67f7da24229e4ef52f2bf" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.581125 4784 generic.go:334] "Generic (PLEG): container finished" podID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerID="7e468db067e2bf1c6e3a73d40165b87bf320d022c181546e5cfd600002d12fcf" exitCode=0 Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.581159 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2bnm" event={"ID":"4bf2bb81-ee53-475a-9648-987ec2d1adb2","Type":"ContainerDied","Data":"7e468db067e2bf1c6e3a73d40165b87bf320d022c181546e5cfd600002d12fcf"} Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.710575 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.807787 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-catalog-content\") pod \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.807921 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-utilities\") pod \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.808033 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fmdv\" (UniqueName: \"kubernetes.io/projected/4bf2bb81-ee53-475a-9648-987ec2d1adb2-kube-api-access-5fmdv\") pod \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\" (UID: \"4bf2bb81-ee53-475a-9648-987ec2d1adb2\") " Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.809782 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-utilities" (OuterVolumeSpecName: "utilities") pod "4bf2bb81-ee53-475a-9648-987ec2d1adb2" (UID: "4bf2bb81-ee53-475a-9648-987ec2d1adb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.814466 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf2bb81-ee53-475a-9648-987ec2d1adb2-kube-api-access-5fmdv" (OuterVolumeSpecName: "kube-api-access-5fmdv") pod "4bf2bb81-ee53-475a-9648-987ec2d1adb2" (UID: "4bf2bb81-ee53-475a-9648-987ec2d1adb2"). InnerVolumeSpecName "kube-api-access-5fmdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.874594 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.902110 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.905743 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bf2bb81-ee53-475a-9648-987ec2d1adb2" (UID: "4bf2bb81-ee53-475a-9648-987ec2d1adb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.909887 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.909923 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fmdv\" (UniqueName: \"kubernetes.io/projected/4bf2bb81-ee53-475a-9648-987ec2d1adb2-kube-api-access-5fmdv\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.909937 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bf2bb81-ee53-475a-9648-987ec2d1adb2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.948194 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:26:21 crc kubenswrapper[4784]: I0123 06:26:21.958287 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.010789 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-utilities\") pod \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.010889 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phs62\" (UniqueName: \"kubernetes.io/projected/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-kube-api-access-phs62\") pod \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.010959 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwt2b\" (UniqueName: \"kubernetes.io/projected/f2daccf7-5481-4092-a720-045f3e033b62-kube-api-access-vwt2b\") pod \"f2daccf7-5481-4092-a720-045f3e033b62\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.010983 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-catalog-content\") pod \"f2daccf7-5481-4092-a720-045f3e033b62\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.011197 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-utilities\") pod 
\"f2daccf7-5481-4092-a720-045f3e033b62\" (UID: \"f2daccf7-5481-4092-a720-045f3e033b62\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.011220 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-catalog-content\") pod \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\" (UID: \"9a96ea8e-c45f-4799-886a-aef90c8b8e1a\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.012576 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-utilities" (OuterVolumeSpecName: "utilities") pod "9a96ea8e-c45f-4799-886a-aef90c8b8e1a" (UID: "9a96ea8e-c45f-4799-886a-aef90c8b8e1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.012595 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-utilities" (OuterVolumeSpecName: "utilities") pod "f2daccf7-5481-4092-a720-045f3e033b62" (UID: "f2daccf7-5481-4092-a720-045f3e033b62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.014541 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-kube-api-access-phs62" (OuterVolumeSpecName: "kube-api-access-phs62") pod "9a96ea8e-c45f-4799-886a-aef90c8b8e1a" (UID: "9a96ea8e-c45f-4799-886a-aef90c8b8e1a"). InnerVolumeSpecName "kube-api-access-phs62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.014625 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2daccf7-5481-4092-a720-045f3e033b62-kube-api-access-vwt2b" (OuterVolumeSpecName: "kube-api-access-vwt2b") pod "f2daccf7-5481-4092-a720-045f3e033b62" (UID: "f2daccf7-5481-4092-a720-045f3e033b62"). InnerVolumeSpecName "kube-api-access-vwt2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.031997 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a96ea8e-c45f-4799-886a-aef90c8b8e1a" (UID: "9a96ea8e-c45f-4799-886a-aef90c8b8e1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.112247 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-utilities\") pod \"64c4e525-6765-4230-b129-3364819dfa47\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113396 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljpp4\" (UniqueName: \"kubernetes.io/projected/dc93f303-432c-4487-a225-f0af2fa5bd49-kube-api-access-ljpp4\") pod \"dc93f303-432c-4487-a225-f0af2fa5bd49\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113502 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-trusted-ca\") pod \"dc93f303-432c-4487-a225-f0af2fa5bd49\" 
(UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113180 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-utilities" (OuterVolumeSpecName: "utilities") pod "64c4e525-6765-4230-b129-3364819dfa47" (UID: "64c4e525-6765-4230-b129-3364819dfa47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113576 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r2dr\" (UniqueName: \"kubernetes.io/projected/64c4e525-6765-4230-b129-3364819dfa47-kube-api-access-4r2dr\") pod \"64c4e525-6765-4230-b129-3364819dfa47\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113659 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-operator-metrics\") pod \"dc93f303-432c-4487-a225-f0af2fa5bd49\" (UID: \"dc93f303-432c-4487-a225-f0af2fa5bd49\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113698 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-catalog-content\") pod \"64c4e525-6765-4230-b129-3364819dfa47\" (UID: \"64c4e525-6765-4230-b129-3364819dfa47\") " Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113910 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwt2b\" (UniqueName: \"kubernetes.io/projected/f2daccf7-5481-4092-a720-045f3e033b62-kube-api-access-vwt2b\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113936 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113951 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113963 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113975 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.113988 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phs62\" (UniqueName: \"kubernetes.io/projected/9a96ea8e-c45f-4799-886a-aef90c8b8e1a-kube-api-access-phs62\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.114537 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dc93f303-432c-4487-a225-f0af2fa5bd49" (UID: "dc93f303-432c-4487-a225-f0af2fa5bd49"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.116859 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c4e525-6765-4230-b129-3364819dfa47-kube-api-access-4r2dr" (OuterVolumeSpecName: "kube-api-access-4r2dr") pod "64c4e525-6765-4230-b129-3364819dfa47" (UID: "64c4e525-6765-4230-b129-3364819dfa47"). InnerVolumeSpecName "kube-api-access-4r2dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.118026 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dc93f303-432c-4487-a225-f0af2fa5bd49" (UID: "dc93f303-432c-4487-a225-f0af2fa5bd49"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.118195 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc93f303-432c-4487-a225-f0af2fa5bd49-kube-api-access-ljpp4" (OuterVolumeSpecName: "kube-api-access-ljpp4") pod "dc93f303-432c-4487-a225-f0af2fa5bd49" (UID: "dc93f303-432c-4487-a225-f0af2fa5bd49"). InnerVolumeSpecName "kube-api-access-ljpp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.134829 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2daccf7-5481-4092-a720-045f3e033b62" (UID: "f2daccf7-5481-4092-a720-045f3e033b62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.167811 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-srwlm"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.195484 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64c4e525-6765-4230-b129-3364819dfa47" (UID: "64c4e525-6765-4230-b129-3364819dfa47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.215662 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljpp4\" (UniqueName: \"kubernetes.io/projected/dc93f303-432c-4487-a225-f0af2fa5bd49-kube-api-access-ljpp4\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.215745 4784 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.215797 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r2dr\" (UniqueName: \"kubernetes.io/projected/64c4e525-6765-4230-b129-3364819dfa47-kube-api-access-4r2dr\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.215812 4784 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dc93f303-432c-4487-a225-f0af2fa5bd49-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.215830 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/64c4e525-6765-4230-b129-3364819dfa47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.215866 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2daccf7-5481-4092-a720-045f3e033b62-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.589526 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h2bnm" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.589516 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2bnm" event={"ID":"4bf2bb81-ee53-475a-9648-987ec2d1adb2","Type":"ContainerDied","Data":"05110e7b361821c9f1d53ff6618a2b7556732967047c2a0f56d22b24e1cff8ea"} Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.590122 4784 scope.go:117] "RemoveContainer" containerID="7e468db067e2bf1c6e3a73d40165b87bf320d022c181546e5cfd600002d12fcf" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.592007 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" event={"ID":"4180fe07-d016-4462-8f55-9da994cc6827","Type":"ContainerStarted","Data":"f8cdd75326fd57b6ab9a27c8e461e4d7d2f057561da89c2d0e939869dc28f570"} Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.592227 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" event={"ID":"4180fe07-d016-4462-8f55-9da994cc6827","Type":"ContainerStarted","Data":"9810d516474423207b2f2b098c2e4889e50e7136a18b8c8f30c3da6bf4159f33"} Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.595031 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vbpl" 
event={"ID":"f2daccf7-5481-4092-a720-045f3e033b62","Type":"ContainerDied","Data":"264ec505d70a5404b755dbd0324055d4e5027c61ba1caad48a4866e0ec3b98a1"} Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.595053 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vbpl" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.599290 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2rk5j" event={"ID":"9a96ea8e-c45f-4799-886a-aef90c8b8e1a","Type":"ContainerDied","Data":"7c311f689a9fb56a213d3ee08d7fa607c57af2e3d31aa819ffd7c8650316f48f"} Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.599734 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2rk5j" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.602013 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vfdf" event={"ID":"64c4e525-6765-4230-b129-3364819dfa47","Type":"ContainerDied","Data":"06fee6ff948d4d94b95a8623efc916450237bc89c5e006adb208bde703a7c86e"} Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.602485 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vfdf" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.607558 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" event={"ID":"dc93f303-432c-4487-a225-f0af2fa5bd49","Type":"ContainerDied","Data":"a122d69361dc583a6ddf8788ab5e015e46d98606cd5a471d2bbda0d9bcce972c"} Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.607684 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n9rnn" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.611972 4784 scope.go:117] "RemoveContainer" containerID="aeb2d829b569c11d2ce05e50936d51b3fcb12045bbe9f75aa574e5c6c8d8c814" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.642679 4784 scope.go:117] "RemoveContainer" containerID="bdfd7c8aef631e24f1ac9d2b5919f98e5ff37429e03b3ce83b6c86397e56de63" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.642677 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" podStartSLOduration=1.642651432 podStartE2EDuration="1.642651432s" podCreationTimestamp="2026-01-23 06:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:26:22.622887093 +0000 UTC m=+385.855395067" watchObservedRunningTime="2026-01-23 06:26:22.642651432 +0000 UTC m=+385.875159406" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.650140 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h2bnm"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.654951 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h2bnm"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.671695 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vbpl"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.672376 4784 scope.go:117] "RemoveContainer" containerID="2ea00f16a1ba13f077f3571aaf1d472b5169b22b4cd6bb4c5b9f6b8bbbf609e0" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.679170 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6vbpl"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.689034 4784 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n9rnn"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.694049 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n9rnn"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.701651 4784 scope.go:117] "RemoveContainer" containerID="ef854161c7cb65566e9af2a5187bf29536ecc8b42d7c3abf8275cb4ea0b987ea" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.709262 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rk5j"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.718031 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2rk5j"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.722460 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vfdf"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.727189 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5vfdf"] Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.728694 4784 scope.go:117] "RemoveContainer" containerID="5881eafaff6a9ec6c0a067a2845c8894841e1044e709d1d2dcef2d9ec73a26ee" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.752038 4784 scope.go:117] "RemoveContainer" containerID="a4a06a1a4de6bb96ecb6204f8ff993dae305732cf8e6acb3f982a6832ee43cb0" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.772856 4784 scope.go:117] "RemoveContainer" containerID="5bc6e3fe5a4b60082275b4a899db38d7f6345e55d38477be9db17282f84e8d4d" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.789484 4784 scope.go:117] "RemoveContainer" containerID="5cb2946608e7f3fdb1521687baa4868eb59b42189671e22b1afb75e1b26e750e" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.806665 4784 scope.go:117] "RemoveContainer" 
containerID="202ab7f6b11fdbf49b5af60ff308819d46c01ff7a19b75b4deaf936da1ac6205" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.821703 4784 scope.go:117] "RemoveContainer" containerID="ddbef208c73165a7a82c70565f5c1ddeeb992faf867726b5ab31158d1fcdaa2f" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.858257 4784 scope.go:117] "RemoveContainer" containerID="aefb9d37a2f550d5074d6dbe0749570b6a1e6d31edcabfc29687498f4f61ce3d" Jan 23 06:26:22 crc kubenswrapper[4784]: I0123 06:26:22.875979 4784 scope.go:117] "RemoveContainer" containerID="f82b81c982a4c6782b144641d93252d830ab34b267f9dbefa3dc687bca3bf511" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.269856 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" path="/var/lib/kubelet/pods/4bf2bb81-ee53-475a-9648-987ec2d1adb2/volumes" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.271468 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c4e525-6765-4230-b129-3364819dfa47" path="/var/lib/kubelet/pods/64c4e525-6765-4230-b129-3364819dfa47/volumes" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.272891 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" path="/var/lib/kubelet/pods/9a96ea8e-c45f-4799-886a-aef90c8b8e1a/volumes" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.275314 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" path="/var/lib/kubelet/pods/dc93f303-432c-4487-a225-f0af2fa5bd49/volumes" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.276296 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2daccf7-5481-4092-a720-045f3e033b62" path="/var/lib/kubelet/pods/f2daccf7-5481-4092-a720-045f3e033b62/volumes" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.323161 4784 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-dtrsz"] Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.323985 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="extract-utilities" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.324155 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="extract-utilities" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.324296 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerName="extract-utilities" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.324456 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerName="extract-utilities" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.324624 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c4e525-6765-4230-b129-3364819dfa47" containerName="extract-content" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.324781 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c4e525-6765-4230-b129-3364819dfa47" containerName="extract-content" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.324907 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c4e525-6765-4230-b129-3364819dfa47" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.325037 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c4e525-6765-4230-b129-3364819dfa47" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.325164 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.325325 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.325453 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.325577 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.325690 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="extract-content" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.325854 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="extract-content" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.325985 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerName="extract-content" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.326094 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerName="extract-content" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.326218 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.326342 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.326472 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="extract-content" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.326582 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="extract-content" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.326698 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c4e525-6765-4230-b129-3364819dfa47" containerName="extract-utilities" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.326853 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c4e525-6765-4230-b129-3364819dfa47" containerName="extract-utilities" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.327004 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.327118 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.327227 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="extract-utilities" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.327333 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="extract-utilities" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.327600 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2daccf7-5481-4092-a720-045f3e033b62" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.327732 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a96ea8e-c45f-4799-886a-aef90c8b8e1a" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.327906 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf2bb81-ee53-475a-9648-987ec2d1adb2" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.328023 4784 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="64c4e525-6765-4230-b129-3364819dfa47" containerName="registry-server" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.328203 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.328353 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" Jan 23 06:26:23 crc kubenswrapper[4784]: E0123 06:26:23.328642 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.328798 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc93f303-432c-4487-a225-f0af2fa5bd49" containerName="marketplace-operator" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.330159 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.336746 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.339550 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-utilities\") pod \"redhat-marketplace-dtrsz\" (UID: \"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.339677 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-catalog-content\") pod \"redhat-marketplace-dtrsz\" (UID: \"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.339748 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4nn\" (UniqueName: \"kubernetes.io/projected/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-kube-api-access-xs4nn\") pod \"redhat-marketplace-dtrsz\" (UID: \"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.350781 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtrsz"] Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.442029 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-utilities\") pod \"redhat-marketplace-dtrsz\" (UID: 
\"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.442184 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-catalog-content\") pod \"redhat-marketplace-dtrsz\" (UID: \"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.442271 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4nn\" (UniqueName: \"kubernetes.io/projected/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-kube-api-access-xs4nn\") pod \"redhat-marketplace-dtrsz\" (UID: \"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.442705 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-utilities\") pod \"redhat-marketplace-dtrsz\" (UID: \"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.442991 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-catalog-content\") pod \"redhat-marketplace-dtrsz\" (UID: \"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.463005 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4nn\" (UniqueName: \"kubernetes.io/projected/e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1-kube-api-access-xs4nn\") pod \"redhat-marketplace-dtrsz\" (UID: 
\"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1\") " pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.519039 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fws6t"] Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.520707 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.526619 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.529313 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fws6t"] Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.543371 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec609ce-97c4-4d5c-9621-2845609c71f1-utilities\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.543430 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rj5x\" (UniqueName: \"kubernetes.io/projected/9ec609ce-97c4-4d5c-9621-2845609c71f1-kube-api-access-6rj5x\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.543467 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec609ce-97c4-4d5c-9621-2845609c71f1-catalog-content\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " 
pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.603575 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.603657 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.622073 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.632734 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.644399 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec609ce-97c4-4d5c-9621-2845609c71f1-utilities\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.644456 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rj5x\" (UniqueName: \"kubernetes.io/projected/9ec609ce-97c4-4d5c-9621-2845609c71f1-kube-api-access-6rj5x\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc 
kubenswrapper[4784]: I0123 06:26:23.644492 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec609ce-97c4-4d5c-9621-2845609c71f1-catalog-content\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.645817 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec609ce-97c4-4d5c-9621-2845609c71f1-utilities\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.646426 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec609ce-97c4-4d5c-9621-2845609c71f1-catalog-content\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.664262 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.690435 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rj5x\" (UniqueName: \"kubernetes.io/projected/9ec609ce-97c4-4d5c-9621-2845609c71f1-kube-api-access-6rj5x\") pod \"redhat-operators-fws6t\" (UID: \"9ec609ce-97c4-4d5c-9621-2845609c71f1\") " pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.718293 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qc9lz"] Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.722776 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.739680 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qc9lz"] Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.846232 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87ee378-b6b8-4c35-a49a-42a09402ba7d-trusted-ca\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.846308 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-bound-sa-token\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.846347 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.846367 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c87ee378-b6b8-4c35-a49a-42a09402ba7d-registry-certificates\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.846394 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npr5c\" (UniqueName: \"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-kube-api-access-npr5c\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.846419 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c87ee378-b6b8-4c35-a49a-42a09402ba7d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.846624 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c87ee378-b6b8-4c35-a49a-42a09402ba7d-ca-trust-extracted\") pod 
\"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.846792 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-registry-tls\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.849867 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.877167 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.947952 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-bound-sa-token\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.948094 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c87ee378-b6b8-4c35-a49a-42a09402ba7d-registry-certificates\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.948137 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npr5c\" (UniqueName: \"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-kube-api-access-npr5c\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.948171 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c87ee378-b6b8-4c35-a49a-42a09402ba7d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.948201 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c87ee378-b6b8-4c35-a49a-42a09402ba7d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.948226 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-registry-tls\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.948848 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c87ee378-b6b8-4c35-a49a-42a09402ba7d-ca-trust-extracted\") 
pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.948934 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87ee378-b6b8-4c35-a49a-42a09402ba7d-trusted-ca\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.952767 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c87ee378-b6b8-4c35-a49a-42a09402ba7d-registry-certificates\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.953080 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c87ee378-b6b8-4c35-a49a-42a09402ba7d-trusted-ca\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.954519 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c87ee378-b6b8-4c35-a49a-42a09402ba7d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:23 crc kubenswrapper[4784]: I0123 06:26:23.956890 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-registry-tls\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:24 crc kubenswrapper[4784]: I0123 06:26:24.474811 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-bound-sa-token\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:24 crc kubenswrapper[4784]: I0123 06:26:24.474988 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npr5c\" (UniqueName: \"kubernetes.io/projected/c87ee378-b6b8-4c35-a49a-42a09402ba7d-kube-api-access-npr5c\") pod \"image-registry-66df7c8f76-qc9lz\" (UID: \"c87ee378-b6b8-4c35-a49a-42a09402ba7d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:24 crc kubenswrapper[4784]: I0123 06:26:24.650909 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:24 crc kubenswrapper[4784]: I0123 06:26:24.822613 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtrsz"] Jan 23 06:26:24 crc kubenswrapper[4784]: W0123 06:26:24.831646 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode58dcf77_8d95_4d5a_8fb3_ed8a463a18b1.slice/crio-a2e26ab6527b175ecdcace0b86e8c87becad6ef2ed4a57c0f4653d339e0a93d1 WatchSource:0}: Error finding container a2e26ab6527b175ecdcace0b86e8c87becad6ef2ed4a57c0f4653d339e0a93d1: Status 404 returned error can't find the container with id a2e26ab6527b175ecdcace0b86e8c87becad6ef2ed4a57c0f4653d339e0a93d1 Jan 23 06:26:24 crc kubenswrapper[4784]: I0123 06:26:24.911713 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fws6t"] Jan 23 06:26:24 crc kubenswrapper[4784]: W0123 06:26:24.932768 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec609ce_97c4_4d5c_9621_2845609c71f1.slice/crio-f8b1f8887ebb4ec3934b6d019788ff087325ccec89b3070bea393a6b01b3d294 WatchSource:0}: Error finding container f8b1f8887ebb4ec3934b6d019788ff087325ccec89b3070bea393a6b01b3d294: Status 404 returned error can't find the container with id f8b1f8887ebb4ec3934b6d019788ff087325ccec89b3070bea393a6b01b3d294 Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.051912 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qc9lz"] Jan 23 06:26:25 crc kubenswrapper[4784]: W0123 06:26:25.089699 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87ee378_b6b8_4c35_a49a_42a09402ba7d.slice/crio-03bce42d2c3d468a3e709f425e06f946b7ac71e3496bba361d18b53b27cf8e29 
WatchSource:0}: Error finding container 03bce42d2c3d468a3e709f425e06f946b7ac71e3496bba361d18b53b27cf8e29: Status 404 returned error can't find the container with id 03bce42d2c3d468a3e709f425e06f946b7ac71e3496bba361d18b53b27cf8e29 Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.633898 4784 generic.go:334] "Generic (PLEG): container finished" podID="9ec609ce-97c4-4d5c-9621-2845609c71f1" containerID="a2d7bb42e7532eeed36989bbfe7a7a59da95f9c91e6937103131354121e75539" exitCode=0 Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.633967 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fws6t" event={"ID":"9ec609ce-97c4-4d5c-9621-2845609c71f1","Type":"ContainerDied","Data":"a2d7bb42e7532eeed36989bbfe7a7a59da95f9c91e6937103131354121e75539"} Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.634388 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fws6t" event={"ID":"9ec609ce-97c4-4d5c-9621-2845609c71f1","Type":"ContainerStarted","Data":"f8b1f8887ebb4ec3934b6d019788ff087325ccec89b3070bea393a6b01b3d294"} Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.639934 4784 generic.go:334] "Generic (PLEG): container finished" podID="e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1" containerID="cea7e99e98428e0156f369c08330068f9bc6cd714c398a642ba477ff4542a948" exitCode=0 Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.641030 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtrsz" event={"ID":"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1","Type":"ContainerDied","Data":"cea7e99e98428e0156f369c08330068f9bc6cd714c398a642ba477ff4542a948"} Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.641062 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtrsz" 
event={"ID":"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1","Type":"ContainerStarted","Data":"a2e26ab6527b175ecdcace0b86e8c87becad6ef2ed4a57c0f4653d339e0a93d1"} Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.656818 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" event={"ID":"c87ee378-b6b8-4c35-a49a-42a09402ba7d","Type":"ContainerStarted","Data":"5ac187712c5956060e625b27842fd8cb53ea1bfe4a2a6455abc11cec3976215a"} Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.656883 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.656905 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" event={"ID":"c87ee378-b6b8-4c35-a49a-42a09402ba7d","Type":"ContainerStarted","Data":"03bce42d2c3d468a3e709f425e06f946b7ac71e3496bba361d18b53b27cf8e29"} Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.712653 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" podStartSLOduration=2.712623985 podStartE2EDuration="2.712623985s" podCreationTimestamp="2026-01-23 06:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:26:25.699664325 +0000 UTC m=+388.932172319" watchObservedRunningTime="2026-01-23 06:26:25.712623985 +0000 UTC m=+388.945131959" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.736646 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-26lkr"] Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.740023 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.743865 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.782845 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-26lkr"] Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.892209 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9fft\" (UniqueName: \"kubernetes.io/projected/a98abc92-b990-4885-af1c-221be1db3652-kube-api-access-p9fft\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.892266 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a98abc92-b990-4885-af1c-221be1db3652-utilities\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.892330 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a98abc92-b990-4885-af1c-221be1db3652-catalog-content\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.918374 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g2n5t"] Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.920825 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.926012 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g2n5t"] Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.928478 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.993653 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9fft\" (UniqueName: \"kubernetes.io/projected/a98abc92-b990-4885-af1c-221be1db3652-kube-api-access-p9fft\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.993707 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a98abc92-b990-4885-af1c-221be1db3652-utilities\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.993795 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a98abc92-b990-4885-af1c-221be1db3652-catalog-content\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.994676 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a98abc92-b990-4885-af1c-221be1db3652-catalog-content\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " 
pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:25 crc kubenswrapper[4784]: I0123 06:26:25.995430 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a98abc92-b990-4885-af1c-221be1db3652-utilities\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.015041 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9fft\" (UniqueName: \"kubernetes.io/projected/a98abc92-b990-4885-af1c-221be1db3652-kube-api-access-p9fft\") pod \"community-operators-26lkr\" (UID: \"a98abc92-b990-4885-af1c-221be1db3652\") " pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.091373 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.095673 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-catalog-content\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.095781 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-utilities\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.095886 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-kffm8\" (UniqueName: \"kubernetes.io/projected/8272bc90-fdfc-49f1-90c1-cec4281786f0-kube-api-access-kffm8\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.199028 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kffm8\" (UniqueName: \"kubernetes.io/projected/8272bc90-fdfc-49f1-90c1-cec4281786f0-kube-api-access-kffm8\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.199129 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-catalog-content\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.199186 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-utilities\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.200992 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-utilities\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.201304 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-catalog-content\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.227175 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kffm8\" (UniqueName: \"kubernetes.io/projected/8272bc90-fdfc-49f1-90c1-cec4281786f0-kube-api-access-kffm8\") pod \"certified-operators-g2n5t\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.238497 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.541320 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-26lkr"] Jan 23 06:26:26 crc kubenswrapper[4784]: W0123 06:26:26.552262 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda98abc92_b990_4885_af1c_221be1db3652.slice/crio-147c57d36d8f180bdff709b716f8dc1faf5f5c56d5ee3bdb57ea8e963259c610 WatchSource:0}: Error finding container 147c57d36d8f180bdff709b716f8dc1faf5f5c56d5ee3bdb57ea8e963259c610: Status 404 returned error can't find the container with id 147c57d36d8f180bdff709b716f8dc1faf5f5c56d5ee3bdb57ea8e963259c610 Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.674013 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fws6t" event={"ID":"9ec609ce-97c4-4d5c-9621-2845609c71f1","Type":"ContainerStarted","Data":"6a930181c62a1e9519707e0a3a823d60ea2a66549836f7daed41c8303755f534"} Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.677873 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-dtrsz" event={"ID":"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1","Type":"ContainerStarted","Data":"d691da52b8fcb63f7ee7f942efe96253ad44b0b8bfe573734dc8739062f31eb5"} Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.681828 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26lkr" event={"ID":"a98abc92-b990-4885-af1c-221be1db3652","Type":"ContainerStarted","Data":"147c57d36d8f180bdff709b716f8dc1faf5f5c56d5ee3bdb57ea8e963259c610"} Jan 23 06:26:26 crc kubenswrapper[4784]: I0123 06:26:26.695225 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g2n5t"] Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.693339 4784 generic.go:334] "Generic (PLEG): container finished" podID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerID="d57213ebbb2054c3d6b9dc44f89b9a91c0c3357cc268788876518f28f8fafed4" exitCode=0 Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.693894 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2n5t" event={"ID":"8272bc90-fdfc-49f1-90c1-cec4281786f0","Type":"ContainerDied","Data":"d57213ebbb2054c3d6b9dc44f89b9a91c0c3357cc268788876518f28f8fafed4"} Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.693931 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2n5t" event={"ID":"8272bc90-fdfc-49f1-90c1-cec4281786f0","Type":"ContainerStarted","Data":"1b85faf11c39a6aeaa1dff5f038345bf5245df0262fd4928da56600cb74c9496"} Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.701678 4784 generic.go:334] "Generic (PLEG): container finished" podID="9ec609ce-97c4-4d5c-9621-2845609c71f1" containerID="6a930181c62a1e9519707e0a3a823d60ea2a66549836f7daed41c8303755f534" exitCode=0 Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.701787 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-fws6t" event={"ID":"9ec609ce-97c4-4d5c-9621-2845609c71f1","Type":"ContainerDied","Data":"6a930181c62a1e9519707e0a3a823d60ea2a66549836f7daed41c8303755f534"} Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.706642 4784 generic.go:334] "Generic (PLEG): container finished" podID="e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1" containerID="d691da52b8fcb63f7ee7f942efe96253ad44b0b8bfe573734dc8739062f31eb5" exitCode=0 Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.706704 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtrsz" event={"ID":"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1","Type":"ContainerDied","Data":"d691da52b8fcb63f7ee7f942efe96253ad44b0b8bfe573734dc8739062f31eb5"} Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.720692 4784 generic.go:334] "Generic (PLEG): container finished" podID="a98abc92-b990-4885-af1c-221be1db3652" containerID="0f21e117682fcb0636636599babd2fb36d06202b1a2378268e018b049a897238" exitCode=0 Jan 23 06:26:27 crc kubenswrapper[4784]: I0123 06:26:27.720777 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26lkr" event={"ID":"a98abc92-b990-4885-af1c-221be1db3652","Type":"ContainerDied","Data":"0f21e117682fcb0636636599babd2fb36d06202b1a2378268e018b049a897238"} Jan 23 06:26:28 crc kubenswrapper[4784]: I0123 06:26:28.730841 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26lkr" event={"ID":"a98abc92-b990-4885-af1c-221be1db3652","Type":"ContainerStarted","Data":"b548b7ac7072fae968893d59bb23de4ff664b2cc1486662895ea07cb6a55591f"} Jan 23 06:26:28 crc kubenswrapper[4784]: I0123 06:26:28.733093 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2n5t" 
event={"ID":"8272bc90-fdfc-49f1-90c1-cec4281786f0","Type":"ContainerStarted","Data":"fcb2ffa1264588ab8033c6d6d0601d2a9ab8a6a38174249f306cd4d1d50e3761"} Jan 23 06:26:28 crc kubenswrapper[4784]: I0123 06:26:28.735574 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fws6t" event={"ID":"9ec609ce-97c4-4d5c-9621-2845609c71f1","Type":"ContainerStarted","Data":"133a8b052b3dd451eb6751af670b5715bf36b777e7931488a028cd5675f74b95"} Jan 23 06:26:28 crc kubenswrapper[4784]: I0123 06:26:28.738677 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtrsz" event={"ID":"e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1","Type":"ContainerStarted","Data":"7cdf62e09b58fa17b9fbe0368397533a2edc1e69c7349e1b0befd5ad46c6a9b5"} Jan 23 06:26:28 crc kubenswrapper[4784]: I0123 06:26:28.782764 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dtrsz" podStartSLOduration=3.301484478 podStartE2EDuration="5.782723615s" podCreationTimestamp="2026-01-23 06:26:23 +0000 UTC" firstStartedPulling="2026-01-23 06:26:25.655457563 +0000 UTC m=+388.887965537" lastFinishedPulling="2026-01-23 06:26:28.1366967 +0000 UTC m=+391.369204674" observedRunningTime="2026-01-23 06:26:28.779607998 +0000 UTC m=+392.012115972" watchObservedRunningTime="2026-01-23 06:26:28.782723615 +0000 UTC m=+392.015231589" Jan 23 06:26:28 crc kubenswrapper[4784]: I0123 06:26:28.822977 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fws6t" podStartSLOduration=3.267770612 podStartE2EDuration="5.822941877s" podCreationTimestamp="2026-01-23 06:26:23 +0000 UTC" firstStartedPulling="2026-01-23 06:26:25.636434092 +0000 UTC m=+388.868942056" lastFinishedPulling="2026-01-23 06:26:28.191605347 +0000 UTC m=+391.424113321" observedRunningTime="2026-01-23 06:26:28.821115882 +0000 UTC m=+392.053623856" 
watchObservedRunningTime="2026-01-23 06:26:28.822941877 +0000 UTC m=+392.055449861" Jan 23 06:26:29 crc kubenswrapper[4784]: I0123 06:26:29.748554 4784 generic.go:334] "Generic (PLEG): container finished" podID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerID="fcb2ffa1264588ab8033c6d6d0601d2a9ab8a6a38174249f306cd4d1d50e3761" exitCode=0 Jan 23 06:26:29 crc kubenswrapper[4784]: I0123 06:26:29.748678 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2n5t" event={"ID":"8272bc90-fdfc-49f1-90c1-cec4281786f0","Type":"ContainerDied","Data":"fcb2ffa1264588ab8033c6d6d0601d2a9ab8a6a38174249f306cd4d1d50e3761"} Jan 23 06:26:29 crc kubenswrapper[4784]: I0123 06:26:29.752077 4784 generic.go:334] "Generic (PLEG): container finished" podID="a98abc92-b990-4885-af1c-221be1db3652" containerID="b548b7ac7072fae968893d59bb23de4ff664b2cc1486662895ea07cb6a55591f" exitCode=0 Jan 23 06:26:29 crc kubenswrapper[4784]: I0123 06:26:29.752133 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26lkr" event={"ID":"a98abc92-b990-4885-af1c-221be1db3652","Type":"ContainerDied","Data":"b548b7ac7072fae968893d59bb23de4ff664b2cc1486662895ea07cb6a55591f"} Jan 23 06:26:30 crc kubenswrapper[4784]: I0123 06:26:30.760798 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-26lkr" event={"ID":"a98abc92-b990-4885-af1c-221be1db3652","Type":"ContainerStarted","Data":"6b7fad9f2fb053790773adc3b271f949e7c5a57c56a5fb780665fd0269ec6851"} Jan 23 06:26:30 crc kubenswrapper[4784]: I0123 06:26:30.765784 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2n5t" event={"ID":"8272bc90-fdfc-49f1-90c1-cec4281786f0","Type":"ContainerStarted","Data":"971226495d30c886dfbbe057a2c7fac43494180819d698cdf705b669d20f4677"} Jan 23 06:26:30 crc kubenswrapper[4784]: I0123 06:26:30.788811 4784 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/community-operators-26lkr" podStartSLOduration=3.254675076 podStartE2EDuration="5.788785323s" podCreationTimestamp="2026-01-23 06:26:25 +0000 UTC" firstStartedPulling="2026-01-23 06:26:27.722882108 +0000 UTC m=+390.955390082" lastFinishedPulling="2026-01-23 06:26:30.256992345 +0000 UTC m=+393.489500329" observedRunningTime="2026-01-23 06:26:30.781429512 +0000 UTC m=+394.013937486" watchObservedRunningTime="2026-01-23 06:26:30.788785323 +0000 UTC m=+394.021293307" Jan 23 06:26:30 crc kubenswrapper[4784]: I0123 06:26:30.815623 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g2n5t" podStartSLOduration=3.335964228 podStartE2EDuration="5.815586474s" podCreationTimestamp="2026-01-23 06:26:25 +0000 UTC" firstStartedPulling="2026-01-23 06:26:27.697257795 +0000 UTC m=+390.929765769" lastFinishedPulling="2026-01-23 06:26:30.176880041 +0000 UTC m=+393.409388015" observedRunningTime="2026-01-23 06:26:30.810610172 +0000 UTC m=+394.043118166" watchObservedRunningTime="2026-01-23 06:26:30.815586474 +0000 UTC m=+394.048094448" Jan 23 06:26:33 crc kubenswrapper[4784]: I0123 06:26:33.665863 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:33 crc kubenswrapper[4784]: I0123 06:26:33.666516 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:33 crc kubenswrapper[4784]: I0123 06:26:33.753895 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:33 crc kubenswrapper[4784]: I0123 06:26:33.828909 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dtrsz" Jan 23 06:26:33 crc kubenswrapper[4784]: I0123 06:26:33.850479 4784 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:33 crc kubenswrapper[4784]: I0123 06:26:33.850911 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:34 crc kubenswrapper[4784]: I0123 06:26:34.900529 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fws6t" podUID="9ec609ce-97c4-4d5c-9621-2845609c71f1" containerName="registry-server" probeResult="failure" output=< Jan 23 06:26:34 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 06:26:34 crc kubenswrapper[4784]: > Jan 23 06:26:36 crc kubenswrapper[4784]: I0123 06:26:36.092241 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:36 crc kubenswrapper[4784]: I0123 06:26:36.092325 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:36 crc kubenswrapper[4784]: I0123 06:26:36.154611 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:36 crc kubenswrapper[4784]: I0123 06:26:36.239491 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:36 crc kubenswrapper[4784]: I0123 06:26:36.240180 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:36 crc kubenswrapper[4784]: I0123 06:26:36.287300 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:36 crc kubenswrapper[4784]: I0123 06:26:36.849320 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-26lkr" Jan 23 06:26:36 crc kubenswrapper[4784]: I0123 06:26:36.854451 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:26:43 crc kubenswrapper[4784]: I0123 06:26:43.902590 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:43 crc kubenswrapper[4784]: I0123 06:26:43.948145 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fws6t" Jan 23 06:26:44 crc kubenswrapper[4784]: I0123 06:26:44.658266 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" Jan 23 06:26:44 crc kubenswrapper[4784]: I0123 06:26:44.725718 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzmbh"] Jan 23 06:26:53 crc kubenswrapper[4784]: I0123 06:26:53.603243 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:26:53 crc kubenswrapper[4784]: I0123 06:26:53.604158 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:26:53 crc kubenswrapper[4784]: I0123 06:26:53.604276 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:26:53 crc kubenswrapper[4784]: 
I0123 06:26:53.605499 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25adc041f328bcd1365d0e84326a3506984c28454ac67a405c2afd11863cc83e"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:26:53 crc kubenswrapper[4784]: I0123 06:26:53.605615 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://25adc041f328bcd1365d0e84326a3506984c28454ac67a405c2afd11863cc83e" gracePeriod=600 Jan 23 06:26:53 crc kubenswrapper[4784]: I0123 06:26:53.935159 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="25adc041f328bcd1365d0e84326a3506984c28454ac67a405c2afd11863cc83e" exitCode=0 Jan 23 06:26:53 crc kubenswrapper[4784]: I0123 06:26:53.935287 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"25adc041f328bcd1365d0e84326a3506984c28454ac67a405c2afd11863cc83e"} Jan 23 06:26:53 crc kubenswrapper[4784]: I0123 06:26:53.935790 4784 scope.go:117] "RemoveContainer" containerID="fdcf0f30aadea7da26b07d6f942108fb5f74d8ccf61a53616be437938f305b3b" Jan 23 06:26:54 crc kubenswrapper[4784]: I0123 06:26:54.946577 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"8bb3afbe52b02da92cf41fff533908180b367876974272e6d79c68e76c0b0d9e"} Jan 23 06:27:09 crc kubenswrapper[4784]: I0123 06:27:09.763249 4784 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" podUID="039c07e3-0dbc-4dd7-9984-5125cc13c6ff" containerName="registry" containerID="cri-o://ea986b64007d6d2b394a5d1031f41b9b047e3ac574cd7c3920d61143e6433902" gracePeriod=30 Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.060875 4784 generic.go:334] "Generic (PLEG): container finished" podID="039c07e3-0dbc-4dd7-9984-5125cc13c6ff" containerID="ea986b64007d6d2b394a5d1031f41b9b047e3ac574cd7c3920d61143e6433902" exitCode=0 Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.060977 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" event={"ID":"039c07e3-0dbc-4dd7-9984-5125cc13c6ff","Type":"ContainerDied","Data":"ea986b64007d6d2b394a5d1031f41b9b047e3ac574cd7c3920d61143e6433902"} Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.185029 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.273137 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-installation-pull-secrets\") pod \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.273219 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdzdp\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-kube-api-access-bdzdp\") pod \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.273423 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.273453 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-trusted-ca\") pod \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.273550 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-ca-trust-extracted\") pod \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.273587 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-tls\") pod \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.273661 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-certificates\") pod \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.274906 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-bound-sa-token\") pod \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\" (UID: \"039c07e3-0dbc-4dd7-9984-5125cc13c6ff\") " Jan 23 06:27:10 crc 
kubenswrapper[4784]: I0123 06:27:10.274685 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "039c07e3-0dbc-4dd7-9984-5125cc13c6ff" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.274698 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "039c07e3-0dbc-4dd7-9984-5125cc13c6ff" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.275665 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.275688 4784 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.280722 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "039c07e3-0dbc-4dd7-9984-5125cc13c6ff" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.280787 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "039c07e3-0dbc-4dd7-9984-5125cc13c6ff" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.281172 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "039c07e3-0dbc-4dd7-9984-5125cc13c6ff" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.282479 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-kube-api-access-bdzdp" (OuterVolumeSpecName: "kube-api-access-bdzdp") pod "039c07e3-0dbc-4dd7-9984-5125cc13c6ff" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff"). InnerVolumeSpecName "kube-api-access-bdzdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.283104 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "039c07e3-0dbc-4dd7-9984-5125cc13c6ff" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.299409 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "039c07e3-0dbc-4dd7-9984-5125cc13c6ff" (UID: "039c07e3-0dbc-4dd7-9984-5125cc13c6ff"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.377242 4784 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.377296 4784 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.377321 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdzdp\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-kube-api-access-bdzdp\") on node \"crc\" DevicePath \"\"" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.377335 4784 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 06:27:10 crc kubenswrapper[4784]: I0123 06:27:10.377348 4784 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/039c07e3-0dbc-4dd7-9984-5125cc13c6ff-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:27:11 crc kubenswrapper[4784]: I0123 06:27:11.079237 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" event={"ID":"039c07e3-0dbc-4dd7-9984-5125cc13c6ff","Type":"ContainerDied","Data":"76bcc8558b78e14ddddc986f42eb4c9e6c7578448a704e36981c29da2e1bd14e"} Jan 23 06:27:11 crc kubenswrapper[4784]: I0123 06:27:11.081582 4784 scope.go:117] "RemoveContainer" containerID="ea986b64007d6d2b394a5d1031f41b9b047e3ac574cd7c3920d61143e6433902" Jan 23 06:27:11 crc kubenswrapper[4784]: I0123 06:27:11.079886 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fzmbh" Jan 23 06:27:11 crc kubenswrapper[4784]: I0123 06:27:11.131060 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzmbh"] Jan 23 06:27:11 crc kubenswrapper[4784]: I0123 06:27:11.135995 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fzmbh"] Jan 23 06:27:11 crc kubenswrapper[4784]: I0123 06:27:11.261872 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="039c07e3-0dbc-4dd7-9984-5125cc13c6ff" path="/var/lib/kubelet/pods/039c07e3-0dbc-4dd7-9984-5125cc13c6ff/volumes" Jan 23 06:28:53 crc kubenswrapper[4784]: I0123 06:28:53.603193 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:28:53 crc kubenswrapper[4784]: I0123 06:28:53.603970 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:29:23 crc 
kubenswrapper[4784]: I0123 06:29:23.603594 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:29:23 crc kubenswrapper[4784]: I0123 06:29:23.604365 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:29:53 crc kubenswrapper[4784]: I0123 06:29:53.602920 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:29:53 crc kubenswrapper[4784]: I0123 06:29:53.603809 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:29:53 crc kubenswrapper[4784]: I0123 06:29:53.603861 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:29:53 crc kubenswrapper[4784]: I0123 06:29:53.604380 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bb3afbe52b02da92cf41fff533908180b367876974272e6d79c68e76c0b0d9e"} 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:29:53 crc kubenswrapper[4784]: I0123 06:29:53.604430 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://8bb3afbe52b02da92cf41fff533908180b367876974272e6d79c68e76c0b0d9e" gracePeriod=600 Jan 23 06:29:54 crc kubenswrapper[4784]: I0123 06:29:54.043677 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="8bb3afbe52b02da92cf41fff533908180b367876974272e6d79c68e76c0b0d9e" exitCode=0 Jan 23 06:29:54 crc kubenswrapper[4784]: I0123 06:29:54.043739 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"8bb3afbe52b02da92cf41fff533908180b367876974272e6d79c68e76c0b0d9e"} Jan 23 06:29:54 crc kubenswrapper[4784]: I0123 06:29:54.044193 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"d5f3a59b1e59c1bd355b45488149c87185e092896ddb07392d0e3d03fa4214d5"} Jan 23 06:29:54 crc kubenswrapper[4784]: I0123 06:29:54.044225 4784 scope.go:117] "RemoveContainer" containerID="25adc041f328bcd1365d0e84326a3506984c28454ac67a405c2afd11863cc83e" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.204726 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf"] Jan 23 06:30:00 crc kubenswrapper[4784]: E0123 06:30:00.206077 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="039c07e3-0dbc-4dd7-9984-5125cc13c6ff" containerName="registry" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.206105 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="039c07e3-0dbc-4dd7-9984-5125cc13c6ff" containerName="registry" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.206299 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="039c07e3-0dbc-4dd7-9984-5125cc13c6ff" containerName="registry" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.207185 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.212695 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.215316 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf"] Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.218179 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.302039 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdn2z\" (UniqueName: \"kubernetes.io/projected/8f3d5c55-d207-432d-8236-64168a40935b-kube-api-access-tdn2z\") pod \"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.302124 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3d5c55-d207-432d-8236-64168a40935b-secret-volume\") pod 
\"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.302165 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3d5c55-d207-432d-8236-64168a40935b-config-volume\") pod \"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.404003 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdn2z\" (UniqueName: \"kubernetes.io/projected/8f3d5c55-d207-432d-8236-64168a40935b-kube-api-access-tdn2z\") pod \"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.404077 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3d5c55-d207-432d-8236-64168a40935b-secret-volume\") pod \"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.404109 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3d5c55-d207-432d-8236-64168a40935b-config-volume\") pod \"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.405393 4784 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3d5c55-d207-432d-8236-64168a40935b-config-volume\") pod \"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.414924 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3d5c55-d207-432d-8236-64168a40935b-secret-volume\") pod \"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.437181 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdn2z\" (UniqueName: \"kubernetes.io/projected/8f3d5c55-d207-432d-8236-64168a40935b-kube-api-access-tdn2z\") pod \"collect-profiles-29485830-v84rf\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.533233 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:00 crc kubenswrapper[4784]: I0123 06:30:00.835069 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf"] Jan 23 06:30:01 crc kubenswrapper[4784]: I0123 06:30:01.095807 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" event={"ID":"8f3d5c55-d207-432d-8236-64168a40935b","Type":"ContainerStarted","Data":"a397f5aa10b572aa7a5ff0fa37600ce4dfe68922557e1355aff08995b3569af7"} Jan 23 06:30:01 crc kubenswrapper[4784]: I0123 06:30:01.096287 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" event={"ID":"8f3d5c55-d207-432d-8236-64168a40935b","Type":"ContainerStarted","Data":"98b7a8b36b6e30a4adb3c3c4a3f895bd32dada4ad9cc0a83ca2ce1b4be205693"} Jan 23 06:30:01 crc kubenswrapper[4784]: I0123 06:30:01.117802 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" podStartSLOduration=1.117781921 podStartE2EDuration="1.117781921s" podCreationTimestamp="2026-01-23 06:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:30:01.11366903 +0000 UTC m=+604.346177014" watchObservedRunningTime="2026-01-23 06:30:01.117781921 +0000 UTC m=+604.350289895" Jan 23 06:30:02 crc kubenswrapper[4784]: I0123 06:30:02.106133 4784 generic.go:334] "Generic (PLEG): container finished" podID="8f3d5c55-d207-432d-8236-64168a40935b" containerID="a397f5aa10b572aa7a5ff0fa37600ce4dfe68922557e1355aff08995b3569af7" exitCode=0 Jan 23 06:30:02 crc kubenswrapper[4784]: I0123 06:30:02.106202 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" event={"ID":"8f3d5c55-d207-432d-8236-64168a40935b","Type":"ContainerDied","Data":"a397f5aa10b572aa7a5ff0fa37600ce4dfe68922557e1355aff08995b3569af7"} Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.395852 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.552791 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3d5c55-d207-432d-8236-64168a40935b-secret-volume\") pod \"8f3d5c55-d207-432d-8236-64168a40935b\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.552948 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3d5c55-d207-432d-8236-64168a40935b-config-volume\") pod \"8f3d5c55-d207-432d-8236-64168a40935b\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.552990 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdn2z\" (UniqueName: \"kubernetes.io/projected/8f3d5c55-d207-432d-8236-64168a40935b-kube-api-access-tdn2z\") pod \"8f3d5c55-d207-432d-8236-64168a40935b\" (UID: \"8f3d5c55-d207-432d-8236-64168a40935b\") " Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.555431 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3d5c55-d207-432d-8236-64168a40935b-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f3d5c55-d207-432d-8236-64168a40935b" (UID: "8f3d5c55-d207-432d-8236-64168a40935b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.556129 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3d5c55-d207-432d-8236-64168a40935b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.571930 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3d5c55-d207-432d-8236-64168a40935b-kube-api-access-tdn2z" (OuterVolumeSpecName: "kube-api-access-tdn2z") pod "8f3d5c55-d207-432d-8236-64168a40935b" (UID: "8f3d5c55-d207-432d-8236-64168a40935b"). InnerVolumeSpecName "kube-api-access-tdn2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.575876 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3d5c55-d207-432d-8236-64168a40935b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f3d5c55-d207-432d-8236-64168a40935b" (UID: "8f3d5c55-d207-432d-8236-64168a40935b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.657243 4784 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3d5c55-d207-432d-8236-64168a40935b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:30:03 crc kubenswrapper[4784]: I0123 06:30:03.657323 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdn2z\" (UniqueName: \"kubernetes.io/projected/8f3d5c55-d207-432d-8236-64168a40935b-kube-api-access-tdn2z\") on node \"crc\" DevicePath \"\"" Jan 23 06:30:04 crc kubenswrapper[4784]: I0123 06:30:04.123207 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" event={"ID":"8f3d5c55-d207-432d-8236-64168a40935b","Type":"ContainerDied","Data":"98b7a8b36b6e30a4adb3c3c4a3f895bd32dada4ad9cc0a83ca2ce1b4be205693"} Jan 23 06:30:04 crc kubenswrapper[4784]: I0123 06:30:04.123677 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b7a8b36b6e30a4adb3c3c4a3f895bd32dada4ad9cc0a83ca2ce1b4be205693" Jan 23 06:30:04 crc kubenswrapper[4784]: I0123 06:30:04.123282 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.154637 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t"] Jan 23 06:31:47 crc kubenswrapper[4784]: E0123 06:31:47.157415 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3d5c55-d207-432d-8236-64168a40935b" containerName="collect-profiles" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.157442 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3d5c55-d207-432d-8236-64168a40935b" containerName="collect-profiles" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.157572 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3d5c55-d207-432d-8236-64168a40935b" containerName="collect-profiles" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.159453 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.161812 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.164635 4784 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-f9wpd" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.164845 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.168843 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-dtn5v"] Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.171073 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dtn5v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.176050 4784 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-dxxc4" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.186642 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t"] Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.193051 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snzc\" (UniqueName: \"kubernetes.io/projected/5528c51f-4fc7-4a52-9e8d-f38af10c6874-kube-api-access-2snzc\") pod \"cert-manager-cainjector-cf98fcc89-tmn9t\" (UID: \"5528c51f-4fc7-4a52-9e8d-f38af10c6874\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.193149 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cr85\" (UniqueName: \"kubernetes.io/projected/e6bccf31-7461-4999-9fdf-b6f2a17b50c4-kube-api-access-2cr85\") pod \"cert-manager-858654f9db-dtn5v\" (UID: \"e6bccf31-7461-4999-9fdf-b6f2a17b50c4\") " pod="cert-manager/cert-manager-858654f9db-dtn5v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.193224 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dtn5v"] Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.200503 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q626v"] Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.201511 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.203548 4784 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wtshc" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.226362 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q626v"] Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.296565 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snzc\" (UniqueName: \"kubernetes.io/projected/5528c51f-4fc7-4a52-9e8d-f38af10c6874-kube-api-access-2snzc\") pod \"cert-manager-cainjector-cf98fcc89-tmn9t\" (UID: \"5528c51f-4fc7-4a52-9e8d-f38af10c6874\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.296703 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdj5b\" (UniqueName: \"kubernetes.io/projected/4c2ff224-fd79-4b3d-8bc7-95199aec7841-kube-api-access-kdj5b\") pod \"cert-manager-webhook-687f57d79b-q626v\" (UID: \"4c2ff224-fd79-4b3d-8bc7-95199aec7841\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.296772 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cr85\" (UniqueName: \"kubernetes.io/projected/e6bccf31-7461-4999-9fdf-b6f2a17b50c4-kube-api-access-2cr85\") pod \"cert-manager-858654f9db-dtn5v\" (UID: \"e6bccf31-7461-4999-9fdf-b6f2a17b50c4\") " pod="cert-manager/cert-manager-858654f9db-dtn5v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.324700 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cr85\" (UniqueName: \"kubernetes.io/projected/e6bccf31-7461-4999-9fdf-b6f2a17b50c4-kube-api-access-2cr85\") pod 
\"cert-manager-858654f9db-dtn5v\" (UID: \"e6bccf31-7461-4999-9fdf-b6f2a17b50c4\") " pod="cert-manager/cert-manager-858654f9db-dtn5v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.324788 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2snzc\" (UniqueName: \"kubernetes.io/projected/5528c51f-4fc7-4a52-9e8d-f38af10c6874-kube-api-access-2snzc\") pod \"cert-manager-cainjector-cf98fcc89-tmn9t\" (UID: \"5528c51f-4fc7-4a52-9e8d-f38af10c6874\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.398148 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdj5b\" (UniqueName: \"kubernetes.io/projected/4c2ff224-fd79-4b3d-8bc7-95199aec7841-kube-api-access-kdj5b\") pod \"cert-manager-webhook-687f57d79b-q626v\" (UID: \"4c2ff224-fd79-4b3d-8bc7-95199aec7841\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.416868 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdj5b\" (UniqueName: \"kubernetes.io/projected/4c2ff224-fd79-4b3d-8bc7-95199aec7841-kube-api-access-kdj5b\") pod \"cert-manager-webhook-687f57d79b-q626v\" (UID: \"4c2ff224-fd79-4b3d-8bc7-95199aec7841\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.483215 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.494570 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dtn5v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.516661 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.819009 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q626v"] Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.829716 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.864595 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" event={"ID":"4c2ff224-fd79-4b3d-8bc7-95199aec7841","Type":"ContainerStarted","Data":"9b4f8a1d17a9789d4600e68362bdb6b087504585ef64fa2ac1cc7a0b02e5b130"} Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.941529 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t"] Jan 23 06:31:47 crc kubenswrapper[4784]: W0123 06:31:47.946812 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5528c51f_4fc7_4a52_9e8d_f38af10c6874.slice/crio-03ba69a59f5f6ef1cfb55269abb55ba077d5dedbabe632fea2cba17ccf44b448 WatchSource:0}: Error finding container 03ba69a59f5f6ef1cfb55269abb55ba077d5dedbabe632fea2cba17ccf44b448: Status 404 returned error can't find the container with id 03ba69a59f5f6ef1cfb55269abb55ba077d5dedbabe632fea2cba17ccf44b448 Jan 23 06:31:47 crc kubenswrapper[4784]: I0123 06:31:47.985070 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dtn5v"] Jan 23 06:31:47 crc kubenswrapper[4784]: W0123 06:31:47.990414 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6bccf31_7461_4999_9fdf_b6f2a17b50c4.slice/crio-b0ff90f02516f883b915570ff690eb772d0251e00d677496248321607865b74a WatchSource:0}: Error finding container 
b0ff90f02516f883b915570ff690eb772d0251e00d677496248321607865b74a: Status 404 returned error can't find the container with id b0ff90f02516f883b915570ff690eb772d0251e00d677496248321607865b74a Jan 23 06:31:48 crc kubenswrapper[4784]: I0123 06:31:48.872918 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dtn5v" event={"ID":"e6bccf31-7461-4999-9fdf-b6f2a17b50c4","Type":"ContainerStarted","Data":"b0ff90f02516f883b915570ff690eb772d0251e00d677496248321607865b74a"} Jan 23 06:31:48 crc kubenswrapper[4784]: I0123 06:31:48.874385 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t" event={"ID":"5528c51f-4fc7-4a52-9e8d-f38af10c6874","Type":"ContainerStarted","Data":"03ba69a59f5f6ef1cfb55269abb55ba077d5dedbabe632fea2cba17ccf44b448"} Jan 23 06:31:50 crc kubenswrapper[4784]: I0123 06:31:50.890572 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" event={"ID":"4c2ff224-fd79-4b3d-8bc7-95199aec7841","Type":"ContainerStarted","Data":"6b768bfe4328b5b01e9e72e96d254f3b551f7b15bdca3ce06208e660371dc743"} Jan 23 06:31:50 crc kubenswrapper[4784]: I0123 06:31:50.890795 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" Jan 23 06:31:50 crc kubenswrapper[4784]: I0123 06:31:50.917155 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" podStartSLOduration=1.054625046 podStartE2EDuration="3.917112211s" podCreationTimestamp="2026-01-23 06:31:47 +0000 UTC" firstStartedPulling="2026-01-23 06:31:47.829472737 +0000 UTC m=+711.061980701" lastFinishedPulling="2026-01-23 06:31:50.691959882 +0000 UTC m=+713.924467866" observedRunningTime="2026-01-23 06:31:50.909602725 +0000 UTC m=+714.142110689" watchObservedRunningTime="2026-01-23 06:31:50.917112211 +0000 UTC m=+714.149620195" Jan 23 
06:31:52 crc kubenswrapper[4784]: I0123 06:31:52.906196 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t" event={"ID":"5528c51f-4fc7-4a52-9e8d-f38af10c6874","Type":"ContainerStarted","Data":"c59644d1e8a3ddb317ce09879b76deda5fad5c246dfe141b4cfe2a3d8d5ef493"} Jan 23 06:31:52 crc kubenswrapper[4784]: I0123 06:31:52.908498 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dtn5v" event={"ID":"e6bccf31-7461-4999-9fdf-b6f2a17b50c4","Type":"ContainerStarted","Data":"2eb94718d38f22384b3c329cbb81f45477ad0efe808913d581d3404f4062bb6b"} Jan 23 06:31:52 crc kubenswrapper[4784]: I0123 06:31:52.928390 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-tmn9t" podStartSLOduration=1.810824773 podStartE2EDuration="5.928357639s" podCreationTimestamp="2026-01-23 06:31:47 +0000 UTC" firstStartedPulling="2026-01-23 06:31:47.951606254 +0000 UTC m=+711.184114218" lastFinishedPulling="2026-01-23 06:31:52.06913907 +0000 UTC m=+715.301647084" observedRunningTime="2026-01-23 06:31:52.924832212 +0000 UTC m=+716.157340266" watchObservedRunningTime="2026-01-23 06:31:52.928357639 +0000 UTC m=+716.160865613" Jan 23 06:31:52 crc kubenswrapper[4784]: I0123 06:31:52.963714 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-dtn5v" podStartSLOduration=1.871354315 podStartE2EDuration="5.963685811s" podCreationTimestamp="2026-01-23 06:31:47 +0000 UTC" firstStartedPulling="2026-01-23 06:31:47.99409054 +0000 UTC m=+711.226598514" lastFinishedPulling="2026-01-23 06:31:52.086422006 +0000 UTC m=+715.318930010" observedRunningTime="2026-01-23 06:31:52.960272216 +0000 UTC m=+716.192780210" watchObservedRunningTime="2026-01-23 06:31:52.963685811 +0000 UTC m=+716.196193825" Jan 23 06:31:53 crc kubenswrapper[4784]: I0123 06:31:53.603787 4784 patch_prober.go:28] 
interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:31:53 crc kubenswrapper[4784]: I0123 06:31:53.603882 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.742089 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9652h"] Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.743012 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovn-controller" containerID="cri-o://b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5" gracePeriod=30 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.743558 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="sbdb" containerID="cri-o://9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787" gracePeriod=30 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.743616 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="nbdb" containerID="cri-o://435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a" gracePeriod=30 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.743664 4784 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="northd" containerID="cri-o://f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645" gracePeriod=30 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.743717 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovn-acl-logging" containerID="cri-o://4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e" gracePeriod=30 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.743804 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kube-rbac-proxy-node" containerID="cri-o://90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d" gracePeriod=30 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.743994 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e" gracePeriod=30 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.798257 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" containerID="cri-o://5dbbbec10ed1c6d67d24d383d9860743c982df36bdab505bb77409c5c9a0aa5b" gracePeriod=30 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.943227 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/2.log" Jan 
23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.944020 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/1.log" Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.944078 4784 generic.go:334] "Generic (PLEG): container finished" podID="76b58650-2600-48a5-b11e-2ed4503cc6b2" containerID="8817814ff7fb7c0b8c339672e8721ca0f715332899fe5f1a0161e291413add1f" exitCode=2 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.944124 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8cjm4" event={"ID":"76b58650-2600-48a5-b11e-2ed4503cc6b2","Type":"ContainerDied","Data":"8817814ff7fb7c0b8c339672e8721ca0f715332899fe5f1a0161e291413add1f"} Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.944205 4784 scope.go:117] "RemoveContainer" containerID="5069fe1f444d91f332095ee10707394c5ba532193fa0b03068a85ec8c6c80916" Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.945722 4784 scope.go:117] "RemoveContainer" containerID="8817814ff7fb7c0b8c339672e8721ca0f715332899fe5f1a0161e291413add1f" Jan 23 06:31:56 crc kubenswrapper[4784]: E0123 06:31:56.946161 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8cjm4_openshift-multus(76b58650-2600-48a5-b11e-2ed4503cc6b2)\"" pod="openshift-multus/multus-8cjm4" podUID="76b58650-2600-48a5-b11e-2ed4503cc6b2" Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.949119 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovnkube-controller/3.log" Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.952515 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovn-acl-logging/0.log" 
Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953103 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovn-controller/0.log" Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953577 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="5dbbbec10ed1c6d67d24d383d9860743c982df36bdab505bb77409c5c9a0aa5b" exitCode=0 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953603 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e" exitCode=0 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953613 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d" exitCode=0 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953622 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e" exitCode=143 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953633 4784 generic.go:334] "Generic (PLEG): container finished" podID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerID="b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5" exitCode=143 Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953661 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"5dbbbec10ed1c6d67d24d383d9860743c982df36bdab505bb77409c5c9a0aa5b"} Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953701 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" 
event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e"} Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953732 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d"} Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953763 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e"} Jan 23 06:31:56 crc kubenswrapper[4784]: I0123 06:31:56.953778 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5"} Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.054939 4784 scope.go:117] "RemoveContainer" containerID="960cf657a3bdda8e30814f9ecfcf97ee4afdd60cc7f99eaec95202691ce6b84f" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.104573 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovn-acl-logging/0.log" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.105253 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9652h_73ef0442-94bc-46f2-a551-15b59d1a5cf0/ovn-controller/0.log" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.105861 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.170037 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75278"] Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.170732 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovn-acl-logging" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.170837 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovn-acl-logging" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.170910 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="northd" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.170975 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="northd" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.171043 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="nbdb" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.171111 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="nbdb" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.171594 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.171653 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.171725 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" 
containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.171797 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.171850 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kube-rbac-proxy-node" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.171897 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kube-rbac-proxy-node" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.171976 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovn-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.172028 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovn-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.172083 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.172133 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.172181 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kubecfg-setup" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.172229 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kubecfg-setup" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.172285 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="sbdb" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.173140 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="sbdb" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.173264 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.173342 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.173422 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.173491 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.173869 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.173982 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174082 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="nbdb" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174153 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174224 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174293 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="kube-rbac-proxy-node" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174364 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="northd" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174444 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174519 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174588 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovn-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174660 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="sbdb" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.174789 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovn-acl-logging" Jan 23 06:31:57 crc kubenswrapper[4784]: E0123 06:31:57.174993 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.175011 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" containerName="ovnkube-controller" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.177044 4784 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.256123 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.256613 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-netd\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.256260 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.256673 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.256957 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.256935 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-openvswitch\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257182 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-etc-openvswitch\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257261 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257380 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257280 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-ovn\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257562 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-bin\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257671 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-config\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257823 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovn-node-metrics-cert\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257948 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-slash\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258035 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-kubelet\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258123 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-systemd\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258209 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-ovn-kubernetes\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258309 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-var-lib-openvswitch\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.257698 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258256 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258395 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258465 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-slash" (OuterVolumeSpecName: "host-slash") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258399 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-systemd-units\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258513 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-node-log\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258542 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-env-overrides\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258582 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-script-lib\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258603 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-log-socket\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258634 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5278\" (UniqueName: 
\"kubernetes.io/projected/73ef0442-94bc-46f2-a551-15b59d1a5cf0-kube-api-access-j5278\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258650 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-netns\") pod \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\" (UID: \"73ef0442-94bc-46f2-a551-15b59d1a5cf0\") " Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.258852 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259097 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259138 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-node-log" (OuterVolumeSpecName: "node-log") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259139 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259202 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-log-socket" (OuterVolumeSpecName: "log-socket") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259269 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259652 4784 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259686 4784 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259684 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259703 4784 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259720 4784 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259735 4784 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259773 4784 reconciler_common.go:293] "Volume 
detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259787 4784 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259801 4784 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259814 4784 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259785 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259828 4784 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259843 4784 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259858 4784 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259877 4784 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259890 4784 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.259902 4784 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.266290 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ef0442-94bc-46f2-a551-15b59d1a5cf0-kube-api-access-j5278" (OuterVolumeSpecName: "kube-api-access-j5278") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: 
"73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "kube-api-access-j5278". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.266853 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.279296 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "73ef0442-94bc-46f2-a551-15b59d1a5cf0" (UID: "73ef0442-94bc-46f2-a551-15b59d1a5cf0"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.361330 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-ovnkube-config\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.361787 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-systemd-units\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.361857 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-slash\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.361928 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.361991 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-kubelet\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362065 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-ovn\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362156 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-etc-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362233 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-ovnkube-script-lib\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362405 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-run-netns\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362511 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-node-log\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362595 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-env-overrides\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362672 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362739 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61844521-c4b0-4bd8-a552-19731d1221ee-ovn-node-metrics-cert\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362943 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-cni-netd\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.362998 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363049 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-systemd\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363284 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8m6j\" (UniqueName: \"kubernetes.io/projected/61844521-c4b0-4bd8-a552-19731d1221ee-kube-api-access-w8m6j\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 
06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363455 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-var-lib-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363512 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-log-socket\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363549 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-cni-bin\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363703 4784 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363787 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5278\" (UniqueName: \"kubernetes.io/projected/73ef0442-94bc-46f2-a551-15b59d1a5cf0-kube-api-access-j5278\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363825 4784 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/73ef0442-94bc-46f2-a551-15b59d1a5cf0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363850 4784 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/73ef0442-94bc-46f2-a551-15b59d1a5cf0-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.363873 4784 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/73ef0442-94bc-46f2-a551-15b59d1a5cf0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.464916 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-node-log\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465003 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-env-overrides\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465048 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465088 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/61844521-c4b0-4bd8-a552-19731d1221ee-ovn-node-metrics-cert\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465136 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-cni-netd\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465170 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465203 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-run-ovn-kubernetes\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465208 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-systemd\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465257 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-systemd\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465313 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8m6j\" (UniqueName: \"kubernetes.io/projected/61844521-c4b0-4bd8-a552-19731d1221ee-kube-api-access-w8m6j\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465313 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465336 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-var-lib-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465348 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-cni-netd\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465397 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-log-socket\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465375 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-log-socket\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465443 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-var-lib-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465665 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-cni-bin\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465857 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-ovnkube-config\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465965 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-systemd-units\") pod 
\"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466019 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-slash\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466111 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466174 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-kubelet\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466172 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-node-log\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466220 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-ovn\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" 
Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466323 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-etc-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466407 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-ovnkube-script-lib\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466452 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-run-netns\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466495 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-kubelet\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.465854 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-cni-bin\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466506 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-slash\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466524 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-etc-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466530 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-openvswitch\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466453 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-env-overrides\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466624 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-host-run-netns\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466648 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-systemd-units\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466737 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61844521-c4b0-4bd8-a552-19731d1221ee-run-ovn\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.466805 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-ovnkube-config\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.467774 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61844521-c4b0-4bd8-a552-19731d1221ee-ovnkube-script-lib\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.474169 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61844521-c4b0-4bd8-a552-19731d1221ee-ovn-node-metrics-cert\") pod \"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.487579 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8m6j\" (UniqueName: \"kubernetes.io/projected/61844521-c4b0-4bd8-a552-19731d1221ee-kube-api-access-w8m6j\") pod 
\"ovnkube-node-75278\" (UID: \"61844521-c4b0-4bd8-a552-19731d1221ee\") " pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.501905 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.522878 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-q626v" Jan 23 06:31:57 crc kubenswrapper[4784]: W0123 06:31:57.541303 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61844521_c4b0_4bd8_a552_19731d1221ee.slice/crio-56d419fe793b7bcbfcb790dd870a68d4c7fa0e24e184a0fbb8f7854bf951f10c WatchSource:0}: Error finding container 56d419fe793b7bcbfcb790dd870a68d4c7fa0e24e184a0fbb8f7854bf951f10c: Status 404 returned error can't find the container with id 56d419fe793b7bcbfcb790dd870a68d4c7fa0e24e184a0fbb8f7854bf951f10c Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.573828 4784 scope.go:117] "RemoveContainer" containerID="47779e085a164d8593a640bba55d92d27a1534b6838b5f38629957a47219698e" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.596168 4784 scope.go:117] "RemoveContainer" containerID="435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.619932 4784 scope.go:117] "RemoveContainer" containerID="4f07942f60f208e61a37691f702e39d199ccf4143324f26a4aa31b1f0c1c296e" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.640229 4784 scope.go:117] "RemoveContainer" containerID="f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.664599 4784 scope.go:117] "RemoveContainer" containerID="9c85d2fbb2f3a9387f9e3ea99082510b14f18fbd73df98e72500061b3d4cf67e" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 
06:31:57.686940 4784 scope.go:117] "RemoveContainer" containerID="90ddef96ced3f868c8ce7d260d476ca4e3c535e1e5279eaa2eabfb35307f401d" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.721526 4784 scope.go:117] "RemoveContainer" containerID="b3f5ba8b44737222c4cb16871c5b810e5a04c5a02eec716224f8af5d087097c5" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.835577 4784 scope.go:117] "RemoveContainer" containerID="5dbbbec10ed1c6d67d24d383d9860743c982df36bdab505bb77409c5c9a0aa5b" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.855508 4784 scope.go:117] "RemoveContainer" containerID="d8e1d2927f5074cbfaba0b5f52c49134e5f561605e3a4ddc13a7fc7d7ec0f6bc" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.877627 4784 scope.go:117] "RemoveContainer" containerID="9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.961093 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/2.log" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.963695 4784 generic.go:334] "Generic (PLEG): container finished" podID="61844521-c4b0-4bd8-a552-19731d1221ee" containerID="0bfa7caad91481eb68c986a8cf4aa7e7ffca3dfb07e3bc6d685cdd8d135526c2" exitCode=0 Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.963856 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerDied","Data":"0bfa7caad91481eb68c986a8cf4aa7e7ffca3dfb07e3bc6d685cdd8d135526c2"} Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.963951 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"56d419fe793b7bcbfcb790dd870a68d4c7fa0e24e184a0fbb8f7854bf951f10c"} Jan 23 06:31:57 crc kubenswrapper[4784]: 
I0123 06:31:57.963975 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"9a8837e9d8d0edb427cb975f0bc6aa101c1c21af9500de9a8b5f410c4b962787"} Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.963994 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"435fc6e9ac24beac2225a6c652ffbd01f78e44fb57dcf4ce1a9d39085abc081a"} Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.964008 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"f127dd8187138ea27c2cacab9a3bd0892ac1cee7b7f881f0825947097991e645"} Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.963887 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" Jan 23 06:31:57 crc kubenswrapper[4784]: I0123 06:31:57.964028 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9652h" event={"ID":"73ef0442-94bc-46f2-a551-15b59d1a5cf0","Type":"ContainerDied","Data":"b274940f6ba2de4b146b58f369eb7cdc4db634d2d13d25729dcc30755c556f8e"} Jan 23 06:31:58 crc kubenswrapper[4784]: I0123 06:31:58.031284 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9652h"] Jan 23 06:31:58 crc kubenswrapper[4784]: I0123 06:31:58.036065 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9652h"] Jan 23 06:31:58 crc kubenswrapper[4784]: I0123 06:31:58.974064 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"bf0c60b6ad3454ba74cc0aa83e3da45fdb403a2c9495cb03b0b0552e2f7a86a7"} Jan 23 06:31:58 crc kubenswrapper[4784]: I0123 06:31:58.974484 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"80ed34f3111b4a830f6e6dac45c9957b3d72e8c816159d620862cfe62413b82c"} Jan 23 06:31:58 crc kubenswrapper[4784]: I0123 06:31:58.974496 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"42e753b38b28ce0965be96adafe2432ee494989be784a0c92ce638f315656a60"} Jan 23 06:31:58 crc kubenswrapper[4784]: I0123 06:31:58.974507 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" 
event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"6541702971a507db67c55c1f42fb75d668e80c36831826085079c733ba73510e"} Jan 23 06:31:58 crc kubenswrapper[4784]: I0123 06:31:58.974517 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"232ae9899c4a569d681af5d98049432cf78e27791c2dee79a107ad322372aa8f"} Jan 23 06:31:58 crc kubenswrapper[4784]: I0123 06:31:58.974526 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"9f48702e92d217c2cefb383968a092e42b99c41461902cc3a637d961a69e41d9"} Jan 23 06:31:59 crc kubenswrapper[4784]: I0123 06:31:59.265745 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ef0442-94bc-46f2-a551-15b59d1a5cf0" path="/var/lib/kubelet/pods/73ef0442-94bc-46f2-a551-15b59d1a5cf0/volumes" Jan 23 06:32:02 crc kubenswrapper[4784]: I0123 06:32:02.024872 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"eb3b27233ee1628f5d26130fef25f9142e8c50d5fef6ea862fa34f9c9fccae2f"} Jan 23 06:32:04 crc kubenswrapper[4784]: I0123 06:32:04.042739 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75278" event={"ID":"61844521-c4b0-4bd8-a552-19731d1221ee","Type":"ContainerStarted","Data":"47b6b29f2099b228b180cbd07e5bb147d40ceef3cbfd32ba1a52ff44869ed1d1"} Jan 23 06:32:04 crc kubenswrapper[4784]: I0123 06:32:04.043526 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:32:04 crc kubenswrapper[4784]: I0123 06:32:04.043550 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:32:04 crc kubenswrapper[4784]: I0123 06:32:04.043565 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:32:04 crc kubenswrapper[4784]: I0123 06:32:04.076215 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-75278" podStartSLOduration=7.076183406 podStartE2EDuration="7.076183406s" podCreationTimestamp="2026-01-23 06:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:32:04.070413364 +0000 UTC m=+727.302921348" watchObservedRunningTime="2026-01-23 06:32:04.076183406 +0000 UTC m=+727.308691390" Jan 23 06:32:04 crc kubenswrapper[4784]: I0123 06:32:04.084045 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:32:04 crc kubenswrapper[4784]: I0123 06:32:04.087241 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:32:10 crc kubenswrapper[4784]: I0123 06:32:10.254521 4784 scope.go:117] "RemoveContainer" containerID="8817814ff7fb7c0b8c339672e8721ca0f715332899fe5f1a0161e291413add1f" Jan 23 06:32:11 crc kubenswrapper[4784]: I0123 06:32:11.100366 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8cjm4_76b58650-2600-48a5-b11e-2ed4503cc6b2/kube-multus/2.log" Jan 23 06:32:11 crc kubenswrapper[4784]: I0123 06:32:11.100867 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8cjm4" event={"ID":"76b58650-2600-48a5-b11e-2ed4503cc6b2","Type":"ContainerStarted","Data":"26e33d5d969471cf87f5598153eefe43180142cf39ac8c0c3812def3542dd53f"} Jan 23 06:32:23 crc kubenswrapper[4784]: I0123 06:32:23.603655 4784 patch_prober.go:28] interesting 
pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:32:23 crc kubenswrapper[4784]: I0123 06:32:23.604521 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:32:27 crc kubenswrapper[4784]: I0123 06:32:27.532301 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75278" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.031412 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp"] Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.034789 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.037310 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.043115 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp"] Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.112083 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7hgq\" (UniqueName: \"kubernetes.io/projected/0bfddef6-60e2-416e-b320-20567c696fc4-kube-api-access-s7hgq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.112221 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.112278 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: 
I0123 06:32:29.213377 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7hgq\" (UniqueName: \"kubernetes.io/projected/0bfddef6-60e2-416e-b320-20567c696fc4-kube-api-access-s7hgq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.213468 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.213512 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.214226 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.214286 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.253555 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7hgq\" (UniqueName: \"kubernetes.io/projected/0bfddef6-60e2-416e-b320-20567c696fc4-kube-api-access-s7hgq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.356485 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:29 crc kubenswrapper[4784]: I0123 06:32:29.594628 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp"] Jan 23 06:32:29 crc kubenswrapper[4784]: W0123 06:32:29.601148 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bfddef6_60e2_416e_b320_20567c696fc4.slice/crio-17a231fdace6db8311846d835fb8e86a34be73313dd5420792163c61082edccf WatchSource:0}: Error finding container 17a231fdace6db8311846d835fb8e86a34be73313dd5420792163c61082edccf: Status 404 returned error can't find the container with id 17a231fdace6db8311846d835fb8e86a34be73313dd5420792163c61082edccf Jan 23 06:32:30 crc kubenswrapper[4784]: I0123 06:32:30.239030 4784 generic.go:334] "Generic (PLEG): container finished" podID="0bfddef6-60e2-416e-b320-20567c696fc4" containerID="6d02fc954bfc87aadb879dbf4dab899fa44c4bf7eace4cccf7bbf619e2462a67" 
exitCode=0 Jan 23 06:32:30 crc kubenswrapper[4784]: I0123 06:32:30.239112 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" event={"ID":"0bfddef6-60e2-416e-b320-20567c696fc4","Type":"ContainerDied","Data":"6d02fc954bfc87aadb879dbf4dab899fa44c4bf7eace4cccf7bbf619e2462a67"} Jan 23 06:32:30 crc kubenswrapper[4784]: I0123 06:32:30.241126 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" event={"ID":"0bfddef6-60e2-416e-b320-20567c696fc4","Type":"ContainerStarted","Data":"17a231fdace6db8311846d835fb8e86a34be73313dd5420792163c61082edccf"} Jan 23 06:32:32 crc kubenswrapper[4784]: I0123 06:32:32.263064 4784 generic.go:334] "Generic (PLEG): container finished" podID="0bfddef6-60e2-416e-b320-20567c696fc4" containerID="7aba3cc7157eaf99042496b226b4cb7f36b5a5a89ab83e8ccdf5cdaaf76dc405" exitCode=0 Jan 23 06:32:32 crc kubenswrapper[4784]: I0123 06:32:32.263178 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" event={"ID":"0bfddef6-60e2-416e-b320-20567c696fc4","Type":"ContainerDied","Data":"7aba3cc7157eaf99042496b226b4cb7f36b5a5a89ab83e8ccdf5cdaaf76dc405"} Jan 23 06:32:33 crc kubenswrapper[4784]: I0123 06:32:33.280382 4784 generic.go:334] "Generic (PLEG): container finished" podID="0bfddef6-60e2-416e-b320-20567c696fc4" containerID="2ad1489769f7a095ed1706c02c3d6e99a37030dbd8379005a81159d1ce7d9164" exitCode=0 Jan 23 06:32:33 crc kubenswrapper[4784]: I0123 06:32:33.280458 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" event={"ID":"0bfddef6-60e2-416e-b320-20567c696fc4","Type":"ContainerDied","Data":"2ad1489769f7a095ed1706c02c3d6e99a37030dbd8379005a81159d1ce7d9164"} Jan 23 06:32:34 crc 
kubenswrapper[4784]: I0123 06:32:34.593909 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.715411 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-util\") pod \"0bfddef6-60e2-416e-b320-20567c696fc4\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.715714 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-bundle\") pod \"0bfddef6-60e2-416e-b320-20567c696fc4\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.715864 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7hgq\" (UniqueName: \"kubernetes.io/projected/0bfddef6-60e2-416e-b320-20567c696fc4-kube-api-access-s7hgq\") pod \"0bfddef6-60e2-416e-b320-20567c696fc4\" (UID: \"0bfddef6-60e2-416e-b320-20567c696fc4\") " Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.720660 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-bundle" (OuterVolumeSpecName: "bundle") pod "0bfddef6-60e2-416e-b320-20567c696fc4" (UID: "0bfddef6-60e2-416e-b320-20567c696fc4"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.724571 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bfddef6-60e2-416e-b320-20567c696fc4-kube-api-access-s7hgq" (OuterVolumeSpecName: "kube-api-access-s7hgq") pod "0bfddef6-60e2-416e-b320-20567c696fc4" (UID: "0bfddef6-60e2-416e-b320-20567c696fc4"). InnerVolumeSpecName "kube-api-access-s7hgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.741307 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-util" (OuterVolumeSpecName: "util") pod "0bfddef6-60e2-416e-b320-20567c696fc4" (UID: "0bfddef6-60e2-416e-b320-20567c696fc4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.821382 4784 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.821578 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7hgq\" (UniqueName: \"kubernetes.io/projected/0bfddef6-60e2-416e-b320-20567c696fc4-kube-api-access-s7hgq\") on node \"crc\" DevicePath \"\"" Jan 23 06:32:34 crc kubenswrapper[4784]: I0123 06:32:34.821616 4784 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bfddef6-60e2-416e-b320-20567c696fc4-util\") on node \"crc\" DevicePath \"\"" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.175182 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q9rrj"] Jan 23 06:32:35 crc kubenswrapper[4784]: E0123 06:32:35.175458 4784 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0bfddef6-60e2-416e-b320-20567c696fc4" containerName="pull" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.175473 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfddef6-60e2-416e-b320-20567c696fc4" containerName="pull" Jan 23 06:32:35 crc kubenswrapper[4784]: E0123 06:32:35.175490 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfddef6-60e2-416e-b320-20567c696fc4" containerName="util" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.175499 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfddef6-60e2-416e-b320-20567c696fc4" containerName="util" Jan 23 06:32:35 crc kubenswrapper[4784]: E0123 06:32:35.175509 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfddef6-60e2-416e-b320-20567c696fc4" containerName="extract" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.175515 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfddef6-60e2-416e-b320-20567c696fc4" containerName="extract" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.175622 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfddef6-60e2-416e-b320-20567c696fc4" containerName="extract" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.176439 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.195602 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q9rrj"] Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.227605 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-catalog-content\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.227686 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbbsn\" (UniqueName: \"kubernetes.io/projected/26deebeb-d3dd-4553-b6c4-ff2275bcc702-kube-api-access-nbbsn\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.227723 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-utilities\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.300058 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" event={"ID":"0bfddef6-60e2-416e-b320-20567c696fc4","Type":"ContainerDied","Data":"17a231fdace6db8311846d835fb8e86a34be73313dd5420792163c61082edccf"} Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.300099 4784 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="17a231fdace6db8311846d835fb8e86a34be73313dd5420792163c61082edccf" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.300232 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.328876 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-catalog-content\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.328967 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbbsn\" (UniqueName: \"kubernetes.io/projected/26deebeb-d3dd-4553-b6c4-ff2275bcc702-kube-api-access-nbbsn\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.329007 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-utilities\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.329482 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-utilities\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.331328 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-catalog-content\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.355798 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbbsn\" (UniqueName: \"kubernetes.io/projected/26deebeb-d3dd-4553-b6c4-ff2275bcc702-kube-api-access-nbbsn\") pod \"redhat-operators-q9rrj\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.510425 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.767502 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q9rrj"] Jan 23 06:32:35 crc kubenswrapper[4784]: I0123 06:32:35.895716 4784 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 06:32:36 crc kubenswrapper[4784]: I0123 06:32:36.308197 4784 generic.go:334] "Generic (PLEG): container finished" podID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerID="47cde0de3d8e90acab892f4e9e2baa93f1ee0b9c2af1bce5668312e57bc515cc" exitCode=0 Jan 23 06:32:36 crc kubenswrapper[4784]: I0123 06:32:36.308257 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9rrj" event={"ID":"26deebeb-d3dd-4553-b6c4-ff2275bcc702","Type":"ContainerDied","Data":"47cde0de3d8e90acab892f4e9e2baa93f1ee0b9c2af1bce5668312e57bc515cc"} Jan 23 06:32:36 crc kubenswrapper[4784]: I0123 06:32:36.308297 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9rrj" 
event={"ID":"26deebeb-d3dd-4553-b6c4-ff2275bcc702","Type":"ContainerStarted","Data":"d83cebba9355a27d708df80833ab9d54a9a97e0c01e86ca772880775994cdf44"} Jan 23 06:32:37 crc kubenswrapper[4784]: I0123 06:32:37.317045 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9rrj" event={"ID":"26deebeb-d3dd-4553-b6c4-ff2275bcc702","Type":"ContainerStarted","Data":"043356f9ec4fd23c1adb540ebd795af08571f23da5ee942a3f40920bd439eeff"} Jan 23 06:32:37 crc kubenswrapper[4784]: E0123 06:32:37.897342 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26deebeb_d3dd_4553_b6c4_ff2275bcc702.slice/crio-conmon-043356f9ec4fd23c1adb540ebd795af08571f23da5ee942a3f40920bd439eeff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26deebeb_d3dd_4553_b6c4_ff2275bcc702.slice/crio-043356f9ec4fd23c1adb540ebd795af08571f23da5ee942a3f40920bd439eeff.scope\": RecentStats: unable to find data in memory cache]" Jan 23 06:32:38 crc kubenswrapper[4784]: I0123 06:32:38.324918 4784 generic.go:334] "Generic (PLEG): container finished" podID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerID="043356f9ec4fd23c1adb540ebd795af08571f23da5ee942a3f40920bd439eeff" exitCode=0 Jan 23 06:32:38 crc kubenswrapper[4784]: I0123 06:32:38.324960 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9rrj" event={"ID":"26deebeb-d3dd-4553-b6c4-ff2275bcc702","Type":"ContainerDied","Data":"043356f9ec4fd23c1adb540ebd795af08571f23da5ee942a3f40920bd439eeff"} Jan 23 06:32:39 crc kubenswrapper[4784]: I0123 06:32:39.333721 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9rrj" 
event={"ID":"26deebeb-d3dd-4553-b6c4-ff2275bcc702","Type":"ContainerStarted","Data":"ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f"} Jan 23 06:32:39 crc kubenswrapper[4784]: I0123 06:32:39.357784 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q9rrj" podStartSLOduration=1.939323627 podStartE2EDuration="4.357744726s" podCreationTimestamp="2026-01-23 06:32:35 +0000 UTC" firstStartedPulling="2026-01-23 06:32:36.31005535 +0000 UTC m=+759.542563314" lastFinishedPulling="2026-01-23 06:32:38.728476439 +0000 UTC m=+761.960984413" observedRunningTime="2026-01-23 06:32:39.353848779 +0000 UTC m=+762.586356753" watchObservedRunningTime="2026-01-23 06:32:39.357744726 +0000 UTC m=+762.590252700" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.027475 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.043594 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.051464 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.051945 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.052046 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-vxd2f" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.084517 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.169794 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.170625 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.176970 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-cpm9z" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.177167 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.178331 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg9rp\" (UniqueName: \"kubernetes.io/projected/19b07fe7-1025-43fc-a462-4aaef0fe9833-kube-api-access-cg9rp\") pod \"obo-prometheus-operator-68bc856cb9-xcvs9\" (UID: \"19b07fe7-1025-43fc-a462-4aaef0fe9833\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.195790 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.196670 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.214523 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.237293 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.281528 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6031f0e9-6391-4a28-8473-a458ec564ad6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc\" (UID: \"6031f0e9-6391-4a28-8473-a458ec564ad6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.281595 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6031f0e9-6391-4a28-8473-a458ec564ad6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc\" (UID: \"6031f0e9-6391-4a28-8473-a458ec564ad6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.281622 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cb0c63a-cabc-45c2-84b7-54ae314e802d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf\" (UID: \"7cb0c63a-cabc-45c2-84b7-54ae314e802d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 
06:32:45.281645 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cb0c63a-cabc-45c2-84b7-54ae314e802d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf\" (UID: \"7cb0c63a-cabc-45c2-84b7-54ae314e802d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.281745 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg9rp\" (UniqueName: \"kubernetes.io/projected/19b07fe7-1025-43fc-a462-4aaef0fe9833-kube-api-access-cg9rp\") pod \"obo-prometheus-operator-68bc856cb9-xcvs9\" (UID: \"19b07fe7-1025-43fc-a462-4aaef0fe9833\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.305329 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg9rp\" (UniqueName: \"kubernetes.io/projected/19b07fe7-1025-43fc-a462-4aaef0fe9833-kube-api-access-cg9rp\") pod \"obo-prometheus-operator-68bc856cb9-xcvs9\" (UID: \"19b07fe7-1025-43fc-a462-4aaef0fe9833\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.377521 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2gwnx"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.382913 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.383939 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6031f0e9-6391-4a28-8473-a458ec564ad6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc\" (UID: \"6031f0e9-6391-4a28-8473-a458ec564ad6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.383999 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6031f0e9-6391-4a28-8473-a458ec564ad6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc\" (UID: \"6031f0e9-6391-4a28-8473-a458ec564ad6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.384026 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cb0c63a-cabc-45c2-84b7-54ae314e802d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf\" (UID: \"7cb0c63a-cabc-45c2-84b7-54ae314e802d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.384872 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cb0c63a-cabc-45c2-84b7-54ae314e802d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf\" (UID: \"7cb0c63a-cabc-45c2-84b7-54ae314e802d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.388392 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7cb0c63a-cabc-45c2-84b7-54ae314e802d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf\" (UID: \"7cb0c63a-cabc-45c2-84b7-54ae314e802d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.391125 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.391839 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-5trtv" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.391819 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6031f0e9-6391-4a28-8473-a458ec564ad6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc\" (UID: \"6031f0e9-6391-4a28-8473-a458ec564ad6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.392540 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7cb0c63a-cabc-45c2-84b7-54ae314e802d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf\" (UID: \"7cb0c63a-cabc-45c2-84b7-54ae314e802d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.393662 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6031f0e9-6391-4a28-8473-a458ec564ad6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc\" (UID: 
\"6031f0e9-6391-4a28-8473-a458ec564ad6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.400203 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2gwnx"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.416184 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.486252 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e753c42-8648-4fb6-afeb-6cb5218b1e37-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2gwnx\" (UID: \"1e753c42-8648-4fb6-afeb-6cb5218b1e37\") " pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.486330 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlr27\" (UniqueName: \"kubernetes.io/projected/1e753c42-8648-4fb6-afeb-6cb5218b1e37-kube-api-access-wlr27\") pod \"observability-operator-59bdc8b94-2gwnx\" (UID: \"1e753c42-8648-4fb6-afeb-6cb5218b1e37\") " pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.487484 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.511994 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.512630 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.514170 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.587923 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e753c42-8648-4fb6-afeb-6cb5218b1e37-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2gwnx\" (UID: \"1e753c42-8648-4fb6-afeb-6cb5218b1e37\") " pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.588008 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlr27\" (UniqueName: \"kubernetes.io/projected/1e753c42-8648-4fb6-afeb-6cb5218b1e37-kube-api-access-wlr27\") pod \"observability-operator-59bdc8b94-2gwnx\" (UID: \"1e753c42-8648-4fb6-afeb-6cb5218b1e37\") " pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.600293 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1e753c42-8648-4fb6-afeb-6cb5218b1e37-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2gwnx\" (UID: \"1e753c42-8648-4fb6-afeb-6cb5218b1e37\") " 
pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.604789 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k9d2w"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.605780 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.608037 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-8hv7l" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.609829 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k9d2w"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.620858 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlr27\" (UniqueName: \"kubernetes.io/projected/1e753c42-8648-4fb6-afeb-6cb5218b1e37-kube-api-access-wlr27\") pod \"observability-operator-59bdc8b94-2gwnx\" (UID: \"1e753c42-8648-4fb6-afeb-6cb5218b1e37\") " pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.689077 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5f9d7a6-d264-4964-8476-a72023915b07-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k9d2w\" (UID: \"d5f9d7a6-d264-4964-8476-a72023915b07\") " pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.689153 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4vms\" (UniqueName: \"kubernetes.io/projected/d5f9d7a6-d264-4964-8476-a72023915b07-kube-api-access-j4vms\") pod \"perses-operator-5bf474d74f-k9d2w\" 
(UID: \"d5f9d7a6-d264-4964-8476-a72023915b07\") " pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.793681 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9"] Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.794653 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5f9d7a6-d264-4964-8476-a72023915b07-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k9d2w\" (UID: \"d5f9d7a6-d264-4964-8476-a72023915b07\") " pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.794712 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4vms\" (UniqueName: \"kubernetes.io/projected/d5f9d7a6-d264-4964-8476-a72023915b07-kube-api-access-j4vms\") pod \"perses-operator-5bf474d74f-k9d2w\" (UID: \"d5f9d7a6-d264-4964-8476-a72023915b07\") " pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.795924 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.796264 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5f9d7a6-d264-4964-8476-a72023915b07-openshift-service-ca\") pod \"perses-operator-5bf474d74f-k9d2w\" (UID: \"d5f9d7a6-d264-4964-8476-a72023915b07\") " pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.821306 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4vms\" (UniqueName: \"kubernetes.io/projected/d5f9d7a6-d264-4964-8476-a72023915b07-kube-api-access-j4vms\") pod \"perses-operator-5bf474d74f-k9d2w\" (UID: \"d5f9d7a6-d264-4964-8476-a72023915b07\") " pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:32:45 crc kubenswrapper[4784]: I0123 06:32:45.951472 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.067808 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2gwnx"] Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.226704 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf"] Jan 23 06:32:46 crc kubenswrapper[4784]: W0123 06:32:46.233193 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb0c63a_cabc_45c2_84b7_54ae314e802d.slice/crio-8d92efd02ac615909a626f8f80fd6d0597df7fccaf7c0d391a651ec7c942727d WatchSource:0}: Error finding container 8d92efd02ac615909a626f8f80fd6d0597df7fccaf7c0d391a651ec7c942727d: Status 404 returned error can't find the container with id 8d92efd02ac615909a626f8f80fd6d0597df7fccaf7c0d391a651ec7c942727d Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.235465 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc"] Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.396447 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" event={"ID":"6031f0e9-6391-4a28-8473-a458ec564ad6","Type":"ContainerStarted","Data":"a2021d2e724e12fd1db7e880348e0743cec16f56b7a8df711bddff3d22aec96c"} Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.397441 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" event={"ID":"1e753c42-8648-4fb6-afeb-6cb5218b1e37","Type":"ContainerStarted","Data":"a85547cb667ef76186787033466f8879bd390cde3ae512f59549f6b8d36d6bda"} Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.398210 4784 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9" event={"ID":"19b07fe7-1025-43fc-a462-4aaef0fe9833","Type":"ContainerStarted","Data":"98d91afae89e7c597f0df15fc0329ff0beb264520aa8df17f090041881ee6821"} Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.400322 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" event={"ID":"7cb0c63a-cabc-45c2-84b7-54ae314e802d","Type":"ContainerStarted","Data":"8d92efd02ac615909a626f8f80fd6d0597df7fccaf7c0d391a651ec7c942727d"} Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.455788 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-k9d2w"] Jan 23 06:32:46 crc kubenswrapper[4784]: W0123 06:32:46.488534 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f9d7a6_d264_4964_8476_a72023915b07.slice/crio-0377ef092d51072d5888104d4f70f429990c17781ab83db3cd7c1eec015b6fbe WatchSource:0}: Error finding container 0377ef092d51072d5888104d4f70f429990c17781ab83db3cd7c1eec015b6fbe: Status 404 returned error can't find the container with id 0377ef092d51072d5888104d4f70f429990c17781ab83db3cd7c1eec015b6fbe Jan 23 06:32:46 crc kubenswrapper[4784]: I0123 06:32:46.584078 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q9rrj" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="registry-server" probeResult="failure" output=< Jan 23 06:32:46 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 06:32:46 crc kubenswrapper[4784]: > Jan 23 06:32:47 crc kubenswrapper[4784]: I0123 06:32:47.407137 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" 
event={"ID":"d5f9d7a6-d264-4964-8476-a72023915b07","Type":"ContainerStarted","Data":"0377ef092d51072d5888104d4f70f429990c17781ab83db3cd7c1eec015b6fbe"} Jan 23 06:32:53 crc kubenswrapper[4784]: I0123 06:32:53.832661 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:32:53 crc kubenswrapper[4784]: I0123 06:32:53.833242 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:32:53 crc kubenswrapper[4784]: I0123 06:32:53.833292 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:32:53 crc kubenswrapper[4784]: I0123 06:32:53.833949 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5f3a59b1e59c1bd355b45488149c87185e092896ddb07392d0e3d03fa4214d5"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:32:53 crc kubenswrapper[4784]: I0123 06:32:53.833997 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://d5f3a59b1e59c1bd355b45488149c87185e092896ddb07392d0e3d03fa4214d5" gracePeriod=600 Jan 23 06:32:54 crc kubenswrapper[4784]: I0123 06:32:54.497500 
4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="d5f3a59b1e59c1bd355b45488149c87185e092896ddb07392d0e3d03fa4214d5" exitCode=0 Jan 23 06:32:54 crc kubenswrapper[4784]: I0123 06:32:54.497903 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"d5f3a59b1e59c1bd355b45488149c87185e092896ddb07392d0e3d03fa4214d5"} Jan 23 06:32:54 crc kubenswrapper[4784]: I0123 06:32:54.497951 4784 scope.go:117] "RemoveContainer" containerID="8bb3afbe52b02da92cf41fff533908180b367876974272e6d79c68e76c0b0d9e" Jan 23 06:32:55 crc kubenswrapper[4784]: I0123 06:32:55.870971 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:56 crc kubenswrapper[4784]: I0123 06:32:56.621362 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:32:56 crc kubenswrapper[4784]: I0123 06:32:56.705112 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q9rrj"] Jan 23 06:32:57 crc kubenswrapper[4784]: I0123 06:32:57.634468 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q9rrj" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="registry-server" containerID="cri-o://ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f" gracePeriod=2 Jan 23 06:32:58 crc kubenswrapper[4784]: I0123 06:32:58.645932 4784 generic.go:334] "Generic (PLEG): container finished" podID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerID="ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f" exitCode=0 Jan 23 06:32:58 crc kubenswrapper[4784]: I0123 06:32:58.646010 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-q9rrj" event={"ID":"26deebeb-d3dd-4553-b6c4-ff2275bcc702","Type":"ContainerDied","Data":"ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f"} Jan 23 06:33:02 crc kubenswrapper[4784]: E0123 06:33:02.411287 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" Jan 23 06:33:02 crc kubenswrapper[4784]: E0123 06:33:02.411762 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4vms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-k9d2w_openshift-operators(d5f9d7a6-d264-4964-8476-a72023915b07): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:33:02 crc kubenswrapper[4784]: E0123 06:33:02.412879 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" podUID="d5f9d7a6-d264-4964-8476-a72023915b07" Jan 23 06:33:02 crc kubenswrapper[4784]: E0123 06:33:02.676703 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" podUID="d5f9d7a6-d264-4964-8476-a72023915b07" Jan 23 06:33:05 crc kubenswrapper[4784]: E0123 06:33:05.515310 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f is running failed: container process not found" containerID="ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:33:05 crc kubenswrapper[4784]: E0123 06:33:05.517267 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f is running failed: container process not found" containerID="ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:33:05 crc kubenswrapper[4784]: E0123 06:33:05.517660 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f is running failed: container process not found" containerID="ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 06:33:05 crc kubenswrapper[4784]: E0123 06:33:05.517711 4784 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-q9rrj" 
podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="registry-server" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.121421 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.217951 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-catalog-content\") pod \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.218010 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbbsn\" (UniqueName: \"kubernetes.io/projected/26deebeb-d3dd-4553-b6c4-ff2275bcc702-kube-api-access-nbbsn\") pod \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.218274 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-utilities\") pod \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\" (UID: \"26deebeb-d3dd-4553-b6c4-ff2275bcc702\") " Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.219250 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-utilities" (OuterVolumeSpecName: "utilities") pod "26deebeb-d3dd-4553-b6c4-ff2275bcc702" (UID: "26deebeb-d3dd-4553-b6c4-ff2275bcc702"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.227058 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26deebeb-d3dd-4553-b6c4-ff2275bcc702-kube-api-access-nbbsn" (OuterVolumeSpecName: "kube-api-access-nbbsn") pod "26deebeb-d3dd-4553-b6c4-ff2275bcc702" (UID: "26deebeb-d3dd-4553-b6c4-ff2275bcc702"). InnerVolumeSpecName "kube-api-access-nbbsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.320819 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.320883 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbbsn\" (UniqueName: \"kubernetes.io/projected/26deebeb-d3dd-4553-b6c4-ff2275bcc702-kube-api-access-nbbsn\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.336930 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26deebeb-d3dd-4553-b6c4-ff2275bcc702" (UID: "26deebeb-d3dd-4553-b6c4-ff2275bcc702"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.423066 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26deebeb-d3dd-4553-b6c4-ff2275bcc702-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.812080 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" event={"ID":"7cb0c63a-cabc-45c2-84b7-54ae314e802d","Type":"ContainerStarted","Data":"12c5a6d2c973fd25192ac38d02f88cf4dcb6c5334c4f0bd25ba08356ec1b4755"} Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.815341 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" event={"ID":"6031f0e9-6391-4a28-8473-a458ec564ad6","Type":"ContainerStarted","Data":"3586a873e9f82fdcf9fce92e2685b9b86d110c252f32354b1a15737d1446e834"} Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.818139 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"ba1cd80d1af05627cca4bf817be8d5ac071e1d0a3b4a67cef6e491a9167052a0"} Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.819927 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" event={"ID":"1e753c42-8648-4fb6-afeb-6cb5218b1e37","Type":"ContainerStarted","Data":"5d25d4d1ad187a322afa6d582f11f5b64fe9299ab0f82b9cf0ec1589b464e8e1"} Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.820154 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.822505 4784 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9" event={"ID":"19b07fe7-1025-43fc-a462-4aaef0fe9833","Type":"ContainerStarted","Data":"e292439ae5823985d51ba7b854cc567a1a2ee46767103d0832dde8b90301c797"} Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.826478 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.827046 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q9rrj" event={"ID":"26deebeb-d3dd-4553-b6c4-ff2275bcc702","Type":"ContainerDied","Data":"d83cebba9355a27d708df80833ab9d54a9a97e0c01e86ca772880775994cdf44"} Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.827124 4784 scope.go:117] "RemoveContainer" containerID="ef8f715f886792a346616c2a50e371653ce9d3310efaecead63306bb63609d9f" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.827287 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q9rrj" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.851084 4784 scope.go:117] "RemoveContainer" containerID="043356f9ec4fd23c1adb540ebd795af08571f23da5ee942a3f40920bd439eeff" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.853608 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf" podStartSLOduration=2.188130045 podStartE2EDuration="21.853573028s" podCreationTimestamp="2026-01-23 06:32:45 +0000 UTC" firstStartedPulling="2026-01-23 06:32:46.237456929 +0000 UTC m=+769.469964903" lastFinishedPulling="2026-01-23 06:33:05.902899912 +0000 UTC m=+789.135407886" observedRunningTime="2026-01-23 06:33:06.843676673 +0000 UTC m=+790.076184657" watchObservedRunningTime="2026-01-23 06:33:06.853573028 +0000 UTC m=+790.086081002" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.886421 4784 scope.go:117] "RemoveContainer" containerID="47cde0de3d8e90acab892f4e9e2baa93f1ee0b9c2af1bce5668312e57bc515cc" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.900501 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xcvs9" podStartSLOduration=1.808387244 podStartE2EDuration="21.900475434s" podCreationTimestamp="2026-01-23 06:32:45 +0000 UTC" firstStartedPulling="2026-01-23 06:32:45.808926615 +0000 UTC m=+769.041434589" lastFinishedPulling="2026-01-23 06:33:05.901014785 +0000 UTC m=+789.133522779" observedRunningTime="2026-01-23 06:33:06.896306781 +0000 UTC m=+790.128814755" watchObservedRunningTime="2026-01-23 06:33:06.900475434 +0000 UTC m=+790.132983408" Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.949132 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q9rrj"] Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.952139 4784 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q9rrj"] Jan 23 06:33:06 crc kubenswrapper[4784]: I0123 06:33:06.999332 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-2gwnx" podStartSLOduration=2.134313546 podStartE2EDuration="21.999302713s" podCreationTimestamp="2026-01-23 06:32:45 +0000 UTC" firstStartedPulling="2026-01-23 06:32:46.080167657 +0000 UTC m=+769.312675631" lastFinishedPulling="2026-01-23 06:33:05.945156824 +0000 UTC m=+789.177664798" observedRunningTime="2026-01-23 06:33:06.998497623 +0000 UTC m=+790.231005597" watchObservedRunningTime="2026-01-23 06:33:06.999302713 +0000 UTC m=+790.231810687" Jan 23 06:33:07 crc kubenswrapper[4784]: I0123 06:33:07.031796 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc" podStartSLOduration=2.386840327 podStartE2EDuration="22.031777584s" podCreationTimestamp="2026-01-23 06:32:45 +0000 UTC" firstStartedPulling="2026-01-23 06:32:46.254282544 +0000 UTC m=+769.486790518" lastFinishedPulling="2026-01-23 06:33:05.899219801 +0000 UTC m=+789.131727775" observedRunningTime="2026-01-23 06:33:07.030964395 +0000 UTC m=+790.263472369" watchObservedRunningTime="2026-01-23 06:33:07.031777584 +0000 UTC m=+790.264285578" Jan 23 06:33:07 crc kubenswrapper[4784]: I0123 06:33:07.262030 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" path="/var/lib/kubelet/pods/26deebeb-d3dd-4553-b6c4-ff2275bcc702/volumes" Jan 23 06:33:19 crc kubenswrapper[4784]: I0123 06:33:19.936170 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" event={"ID":"d5f9d7a6-d264-4964-8476-a72023915b07","Type":"ContainerStarted","Data":"a78aba0e815c95d9beb67416810419eaba0211db887a72288529926994ac6f9c"} Jan 23 06:33:19 crc 
kubenswrapper[4784]: I0123 06:33:19.937542 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:33:19 crc kubenswrapper[4784]: I0123 06:33:19.960218 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" podStartSLOduration=2.458002022 podStartE2EDuration="34.960192736s" podCreationTimestamp="2026-01-23 06:32:45 +0000 UTC" firstStartedPulling="2026-01-23 06:32:46.493636858 +0000 UTC m=+769.726144832" lastFinishedPulling="2026-01-23 06:33:18.995827562 +0000 UTC m=+802.228335546" observedRunningTime="2026-01-23 06:33:19.956202177 +0000 UTC m=+803.188710171" watchObservedRunningTime="2026-01-23 06:33:19.960192736 +0000 UTC m=+803.192700710" Jan 23 06:33:25 crc kubenswrapper[4784]: I0123 06:33:25.955769 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-k9d2w" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.821958 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc"] Jan 23 06:33:45 crc kubenswrapper[4784]: E0123 06:33:45.823060 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="extract-utilities" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.823079 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="extract-utilities" Jan 23 06:33:45 crc kubenswrapper[4784]: E0123 06:33:45.823093 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="registry-server" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.823102 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" 
containerName="registry-server" Jan 23 06:33:45 crc kubenswrapper[4784]: E0123 06:33:45.823117 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="extract-content" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.823127 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="extract-content" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.823272 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="26deebeb-d3dd-4553-b6c4-ff2275bcc702" containerName="registry-server" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.824385 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.827602 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.831426 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc"] Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.839586 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8qbd\" (UniqueName: \"kubernetes.io/projected/eac5d9d9-1017-4063-b4a9-18b05eece465-kube-api-access-r8qbd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.839703 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.839772 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.940951 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qbd\" (UniqueName: \"kubernetes.io/projected/eac5d9d9-1017-4063-b4a9-18b05eece465-kube-api-access-r8qbd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.941017 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.941055 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-util\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.941660 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.941808 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:45 crc kubenswrapper[4784]: I0123 06:33:45.961918 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qbd\" (UniqueName: \"kubernetes.io/projected/eac5d9d9-1017-4063-b4a9-18b05eece465-kube-api-access-r8qbd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:46 crc kubenswrapper[4784]: I0123 06:33:46.203071 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:46 crc kubenswrapper[4784]: I0123 06:33:46.442118 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc"] Jan 23 06:33:46 crc kubenswrapper[4784]: W0123 06:33:46.451267 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeac5d9d9_1017_4063_b4a9_18b05eece465.slice/crio-ca1207473b5c16250ac6bb087898b6d90cec5cd30a1bea2586f0a9ae291bf43b WatchSource:0}: Error finding container ca1207473b5c16250ac6bb087898b6d90cec5cd30a1bea2586f0a9ae291bf43b: Status 404 returned error can't find the container with id ca1207473b5c16250ac6bb087898b6d90cec5cd30a1bea2586f0a9ae291bf43b Jan 23 06:33:47 crc kubenswrapper[4784]: I0123 06:33:47.224576 4784 generic.go:334] "Generic (PLEG): container finished" podID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerID="b7c9154654634721a3fc5f27a518d9fb3129a148c7f917978b587c9c7c83c1d0" exitCode=0 Jan 23 06:33:47 crc kubenswrapper[4784]: I0123 06:33:47.224659 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" event={"ID":"eac5d9d9-1017-4063-b4a9-18b05eece465","Type":"ContainerDied","Data":"b7c9154654634721a3fc5f27a518d9fb3129a148c7f917978b587c9c7c83c1d0"} Jan 23 06:33:47 crc kubenswrapper[4784]: I0123 06:33:47.224709 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" event={"ID":"eac5d9d9-1017-4063-b4a9-18b05eece465","Type":"ContainerStarted","Data":"ca1207473b5c16250ac6bb087898b6d90cec5cd30a1bea2586f0a9ae291bf43b"} Jan 23 06:33:51 crc kubenswrapper[4784]: I0123 06:33:51.260982 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerID="a0feb1683bc7ea86e4d35ef7d914d345afbd07c613a2def8df9668cc1e0c51dc" exitCode=0 Jan 23 06:33:51 crc kubenswrapper[4784]: I0123 06:33:51.262160 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" event={"ID":"eac5d9d9-1017-4063-b4a9-18b05eece465","Type":"ContainerDied","Data":"a0feb1683bc7ea86e4d35ef7d914d345afbd07c613a2def8df9668cc1e0c51dc"} Jan 23 06:33:52 crc kubenswrapper[4784]: I0123 06:33:52.273360 4784 generic.go:334] "Generic (PLEG): container finished" podID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerID="e5323e82f5d803f4ec809a70f1875dc9c5e6c6adc9f9759129fd611ad196368f" exitCode=0 Jan 23 06:33:52 crc kubenswrapper[4784]: I0123 06:33:52.273600 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" event={"ID":"eac5d9d9-1017-4063-b4a9-18b05eece465","Type":"ContainerDied","Data":"e5323e82f5d803f4ec809a70f1875dc9c5e6c6adc9f9759129fd611ad196368f"} Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.671330 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.866845 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qbd\" (UniqueName: \"kubernetes.io/projected/eac5d9d9-1017-4063-b4a9-18b05eece465-kube-api-access-r8qbd\") pod \"eac5d9d9-1017-4063-b4a9-18b05eece465\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.867060 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-util\") pod \"eac5d9d9-1017-4063-b4a9-18b05eece465\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.867132 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-bundle\") pod \"eac5d9d9-1017-4063-b4a9-18b05eece465\" (UID: \"eac5d9d9-1017-4063-b4a9-18b05eece465\") " Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.868554 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-bundle" (OuterVolumeSpecName: "bundle") pod "eac5d9d9-1017-4063-b4a9-18b05eece465" (UID: "eac5d9d9-1017-4063-b4a9-18b05eece465"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.878421 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac5d9d9-1017-4063-b4a9-18b05eece465-kube-api-access-r8qbd" (OuterVolumeSpecName: "kube-api-access-r8qbd") pod "eac5d9d9-1017-4063-b4a9-18b05eece465" (UID: "eac5d9d9-1017-4063-b4a9-18b05eece465"). InnerVolumeSpecName "kube-api-access-r8qbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.889430 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-util" (OuterVolumeSpecName: "util") pod "eac5d9d9-1017-4063-b4a9-18b05eece465" (UID: "eac5d9d9-1017-4063-b4a9-18b05eece465"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.969163 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8qbd\" (UniqueName: \"kubernetes.io/projected/eac5d9d9-1017-4063-b4a9-18b05eece465-kube-api-access-r8qbd\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.969210 4784 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-util\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:53 crc kubenswrapper[4784]: I0123 06:33:53.969224 4784 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eac5d9d9-1017-4063-b4a9-18b05eece465-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:54 crc kubenswrapper[4784]: I0123 06:33:54.293839 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" event={"ID":"eac5d9d9-1017-4063-b4a9-18b05eece465","Type":"ContainerDied","Data":"ca1207473b5c16250ac6bb087898b6d90cec5cd30a1bea2586f0a9ae291bf43b"} Jan 23 06:33:54 crc kubenswrapper[4784]: I0123 06:33:54.293927 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca1207473b5c16250ac6bb087898b6d90cec5cd30a1bea2586f0a9ae291bf43b" Jan 23 06:33:54 crc kubenswrapper[4784]: I0123 06:33:54.294475 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.418490 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r4rm7"] Jan 23 06:33:57 crc kubenswrapper[4784]: E0123 06:33:57.419169 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerName="util" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.419185 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerName="util" Jan 23 06:33:57 crc kubenswrapper[4784]: E0123 06:33:57.419208 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerName="extract" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.419215 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerName="extract" Jan 23 06:33:57 crc kubenswrapper[4784]: E0123 06:33:57.419229 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerName="pull" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.419235 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerName="pull" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.419353 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac5d9d9-1017-4063-b4a9-18b05eece465" containerName="extract" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.419981 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r4rm7" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.427194 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.427347 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-59xrk" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.427878 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.431527 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mhm\" (UniqueName: \"kubernetes.io/projected/58e69dbd-d9f9-48a5-8600-e14bda89ab89-kube-api-access-w5mhm\") pod \"nmstate-operator-646758c888-r4rm7\" (UID: \"58e69dbd-d9f9-48a5-8600-e14bda89ab89\") " pod="openshift-nmstate/nmstate-operator-646758c888-r4rm7" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.436409 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r4rm7"] Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.533479 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mhm\" (UniqueName: \"kubernetes.io/projected/58e69dbd-d9f9-48a5-8600-e14bda89ab89-kube-api-access-w5mhm\") pod \"nmstate-operator-646758c888-r4rm7\" (UID: \"58e69dbd-d9f9-48a5-8600-e14bda89ab89\") " pod="openshift-nmstate/nmstate-operator-646758c888-r4rm7" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.563595 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mhm\" (UniqueName: \"kubernetes.io/projected/58e69dbd-d9f9-48a5-8600-e14bda89ab89-kube-api-access-w5mhm\") pod \"nmstate-operator-646758c888-r4rm7\" (UID: 
\"58e69dbd-d9f9-48a5-8600-e14bda89ab89\") " pod="openshift-nmstate/nmstate-operator-646758c888-r4rm7" Jan 23 06:33:57 crc kubenswrapper[4784]: I0123 06:33:57.741923 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r4rm7" Jan 23 06:33:58 crc kubenswrapper[4784]: I0123 06:33:58.195300 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r4rm7"] Jan 23 06:33:58 crc kubenswrapper[4784]: I0123 06:33:58.336225 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r4rm7" event={"ID":"58e69dbd-d9f9-48a5-8600-e14bda89ab89","Type":"ContainerStarted","Data":"4c2488830010ab7bf68ea04f01532817d1f957a80f12643261d1ce89d5d53578"} Jan 23 06:34:01 crc kubenswrapper[4784]: I0123 06:34:01.359829 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r4rm7" event={"ID":"58e69dbd-d9f9-48a5-8600-e14bda89ab89","Type":"ContainerStarted","Data":"f5621344aa5f79decda521a7cac94c0fe9b7ee30cd2e2108e8cfe809a7dfc807"} Jan 23 06:34:01 crc kubenswrapper[4784]: I0123 06:34:01.383139 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-r4rm7" podStartSLOduration=1.431316279 podStartE2EDuration="4.383116509s" podCreationTimestamp="2026-01-23 06:33:57 +0000 UTC" firstStartedPulling="2026-01-23 06:33:58.209782673 +0000 UTC m=+841.442290647" lastFinishedPulling="2026-01-23 06:34:01.161582903 +0000 UTC m=+844.394090877" observedRunningTime="2026-01-23 06:34:01.379367446 +0000 UTC m=+844.611875440" watchObservedRunningTime="2026-01-23 06:34:01.383116509 +0000 UTC m=+844.615624483" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.368360 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 
06:34:07.370972 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.373885 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kw56f"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.375018 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.375846 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.377134 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-x5h92" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.381574 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zpz\" (UniqueName: \"kubernetes.io/projected/21b9c71e-8dc5-41c7-86a3-9d840f155413-kube-api-access-r9zpz\") pod \"nmstate-webhook-8474b5b9d8-cjbl9\" (UID: \"21b9c71e-8dc5-41c7-86a3-9d840f155413\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.381653 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21b9c71e-8dc5-41c7-86a3-9d840f155413-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-cjbl9\" (UID: \"21b9c71e-8dc5-41c7-86a3-9d840f155413\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.389366 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.409443 4784 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kw56f"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.420118 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-m77xg"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.421120 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.482828 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-ovs-socket\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.482925 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftnn5\" (UniqueName: \"kubernetes.io/projected/69d35b69-1071-41f1-ba7c-37f25670f4cb-kube-api-access-ftnn5\") pod \"nmstate-metrics-54757c584b-kw56f\" (UID: \"69d35b69-1071-41f1-ba7c-37f25670f4cb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.482962 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckzz\" (UniqueName: \"kubernetes.io/projected/40a88789-2452-42ca-9b44-14a6614b413c-kube-api-access-tckzz\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.482997 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9zpz\" (UniqueName: \"kubernetes.io/projected/21b9c71e-8dc5-41c7-86a3-9d840f155413-kube-api-access-r9zpz\") pod \"nmstate-webhook-8474b5b9d8-cjbl9\" 
(UID: \"21b9c71e-8dc5-41c7-86a3-9d840f155413\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.483085 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21b9c71e-8dc5-41c7-86a3-9d840f155413-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-cjbl9\" (UID: \"21b9c71e-8dc5-41c7-86a3-9d840f155413\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.483111 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-nmstate-lock\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.483133 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-dbus-socket\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: E0123 06:34:07.483206 4784 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 06:34:07 crc kubenswrapper[4784]: E0123 06:34:07.483266 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21b9c71e-8dc5-41c7-86a3-9d840f155413-tls-key-pair podName:21b9c71e-8dc5-41c7-86a3-9d840f155413 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:07.983244615 +0000 UTC m=+851.215752589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/21b9c71e-8dc5-41c7-86a3-9d840f155413-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-cjbl9" (UID: "21b9c71e-8dc5-41c7-86a3-9d840f155413") : secret "openshift-nmstate-webhook" not found Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.510864 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9zpz\" (UniqueName: \"kubernetes.io/projected/21b9c71e-8dc5-41c7-86a3-9d840f155413-kube-api-access-r9zpz\") pod \"nmstate-webhook-8474b5b9d8-cjbl9\" (UID: \"21b9c71e-8dc5-41c7-86a3-9d840f155413\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.534110 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.542367 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.545831 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.547835 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.547843 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cgq86" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.567700 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584281 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/171baadd-8608-486e-a418-65f76de1cf06-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584357 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftnn5\" (UniqueName: \"kubernetes.io/projected/69d35b69-1071-41f1-ba7c-37f25670f4cb-kube-api-access-ftnn5\") pod \"nmstate-metrics-54757c584b-kw56f\" (UID: \"69d35b69-1071-41f1-ba7c-37f25670f4cb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584400 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckzz\" (UniqueName: \"kubernetes.io/projected/40a88789-2452-42ca-9b44-14a6614b413c-kube-api-access-tckzz\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584462 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-nmstate-lock\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584483 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-dbus-socket\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584513 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lgcf\" 
(UniqueName: \"kubernetes.io/projected/171baadd-8608-486e-a418-65f76de1cf06-kube-api-access-9lgcf\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584558 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-ovs-socket\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584578 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/171baadd-8608-486e-a418-65f76de1cf06-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584614 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-nmstate-lock\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584676 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-ovs-socket\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.584962 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/40a88789-2452-42ca-9b44-14a6614b413c-dbus-socket\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.604359 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tckzz\" (UniqueName: \"kubernetes.io/projected/40a88789-2452-42ca-9b44-14a6614b413c-kube-api-access-tckzz\") pod \"nmstate-handler-m77xg\" (UID: \"40a88789-2452-42ca-9b44-14a6614b413c\") " pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.622941 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftnn5\" (UniqueName: \"kubernetes.io/projected/69d35b69-1071-41f1-ba7c-37f25670f4cb-kube-api-access-ftnn5\") pod \"nmstate-metrics-54757c584b-kw56f\" (UID: \"69d35b69-1071-41f1-ba7c-37f25670f4cb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.686629 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lgcf\" (UniqueName: \"kubernetes.io/projected/171baadd-8608-486e-a418-65f76de1cf06-kube-api-access-9lgcf\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.687165 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/171baadd-8608-486e-a418-65f76de1cf06-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.687192 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/171baadd-8608-486e-a418-65f76de1cf06-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: E0123 06:34:07.687284 4784 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 23 06:34:07 crc kubenswrapper[4784]: E0123 06:34:07.687380 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/171baadd-8608-486e-a418-65f76de1cf06-plugin-serving-cert podName:171baadd-8608-486e-a418-65f76de1cf06 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:08.187353841 +0000 UTC m=+851.419861815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/171baadd-8608-486e-a418-65f76de1cf06-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-5tb77" (UID: "171baadd-8608-486e-a418-65f76de1cf06") : secret "plugin-serving-cert" not found Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.688691 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/171baadd-8608-486e-a418-65f76de1cf06-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.704040 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.719460 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lgcf\" (UniqueName: \"kubernetes.io/projected/171baadd-8608-486e-a418-65f76de1cf06-kube-api-access-9lgcf\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.733930 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58d484d7c8-xhg4d"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.735089 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.735833 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.756628 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d484d7c8-xhg4d"] Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.789497 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-oauth-config\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.789573 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-trusted-ca-bundle\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " 
pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.789617 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-oauth-serving-cert\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.789664 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqskm\" (UniqueName: \"kubernetes.io/projected/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-kube-api-access-vqskm\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.789685 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-config\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.789707 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-serving-cert\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.789726 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-service-ca\") pod 
\"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.891490 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqskm\" (UniqueName: \"kubernetes.io/projected/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-kube-api-access-vqskm\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.891549 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-config\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.891579 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-serving-cert\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.891612 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-service-ca\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.891663 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-oauth-config\") pod \"console-58d484d7c8-xhg4d\" 
(UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.891709 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-trusted-ca-bundle\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.891767 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-oauth-serving-cert\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.893341 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-oauth-serving-cert\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.894064 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-config\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.895542 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-service-ca\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " 
pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.896072 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-trusted-ca-bundle\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.898562 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-oauth-config\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.898836 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-console-serving-cert\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.908905 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqskm\" (UniqueName: \"kubernetes.io/projected/7979f0da-f16f-4e2a-8c1c-a667607ddcf2-kube-api-access-vqskm\") pod \"console-58d484d7c8-xhg4d\" (UID: \"7979f0da-f16f-4e2a-8c1c-a667607ddcf2\") " pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.947795 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-kw56f"] Jan 23 06:34:07 crc kubenswrapper[4784]: W0123 06:34:07.955268 4784 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69d35b69_1071_41f1_ba7c_37f25670f4cb.slice/crio-e9bb594a49fb208cd3be36b73914824d514797ff78ae4a38f9ca48df9ab4e6f1 WatchSource:0}: Error finding container e9bb594a49fb208cd3be36b73914824d514797ff78ae4a38f9ca48df9ab4e6f1: Status 404 returned error can't find the container with id e9bb594a49fb208cd3be36b73914824d514797ff78ae4a38f9ca48df9ab4e6f1 Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.994323 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21b9c71e-8dc5-41c7-86a3-9d840f155413-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-cjbl9\" (UID: \"21b9c71e-8dc5-41c7-86a3-9d840f155413\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:07 crc kubenswrapper[4784]: I0123 06:34:07.999458 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/21b9c71e-8dc5-41c7-86a3-9d840f155413-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-cjbl9\" (UID: \"21b9c71e-8dc5-41c7-86a3-9d840f155413\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.105840 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.199940 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/171baadd-8608-486e-a418-65f76de1cf06-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.206450 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/171baadd-8608-486e-a418-65f76de1cf06-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5tb77\" (UID: \"171baadd-8608-486e-a418-65f76de1cf06\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.294458 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.443651 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" event={"ID":"69d35b69-1071-41f1-ba7c-37f25670f4cb","Type":"ContainerStarted","Data":"e9bb594a49fb208cd3be36b73914824d514797ff78ae4a38f9ca48df9ab4e6f1"} Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.446338 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-m77xg" event={"ID":"40a88789-2452-42ca-9b44-14a6614b413c","Type":"ContainerStarted","Data":"bce53958a02f996bb1d17f3722de0742eb7e6531a34ee27928a80d18dab086be"} Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.464979 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.570285 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9"] Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.582102 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d484d7c8-xhg4d"] Jan 23 06:34:08 crc kubenswrapper[4784]: W0123 06:34:08.593626 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7979f0da_f16f_4e2a_8c1c_a667607ddcf2.slice/crio-7d1cdb95b09f51319a86d375c3ce3ea74b2e0a991d63d0ccce5c6948e7fc0fc5 WatchSource:0}: Error finding container 7d1cdb95b09f51319a86d375c3ce3ea74b2e0a991d63d0ccce5c6948e7fc0fc5: Status 404 returned error can't find the container with id 7d1cdb95b09f51319a86d375c3ce3ea74b2e0a991d63d0ccce5c6948e7fc0fc5 Jan 23 06:34:08 crc kubenswrapper[4784]: W0123 06:34:08.594792 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21b9c71e_8dc5_41c7_86a3_9d840f155413.slice/crio-6ac8e4069130672a9c998dd3718d55f5eacb43097ef519dfce832060a4f8bc34 WatchSource:0}: Error finding container 6ac8e4069130672a9c998dd3718d55f5eacb43097ef519dfce832060a4f8bc34: Status 404 returned error can't find the container with id 6ac8e4069130672a9c998dd3718d55f5eacb43097ef519dfce832060a4f8bc34 Jan 23 06:34:08 crc kubenswrapper[4784]: I0123 06:34:08.756834 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77"] Jan 23 06:34:08 crc kubenswrapper[4784]: W0123 06:34:08.760378 4784 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod171baadd_8608_486e_a418_65f76de1cf06.slice/crio-f74453dd1a956bacc6c96091072f2b5c685d3c8721019e233aa39165ee1a8171 WatchSource:0}: Error finding container f74453dd1a956bacc6c96091072f2b5c685d3c8721019e233aa39165ee1a8171: Status 404 returned error can't find the container with id f74453dd1a956bacc6c96091072f2b5c685d3c8721019e233aa39165ee1a8171 Jan 23 06:34:09 crc kubenswrapper[4784]: I0123 06:34:09.457821 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" event={"ID":"21b9c71e-8dc5-41c7-86a3-9d840f155413","Type":"ContainerStarted","Data":"6ac8e4069130672a9c998dd3718d55f5eacb43097ef519dfce832060a4f8bc34"} Jan 23 06:34:09 crc kubenswrapper[4784]: I0123 06:34:09.460608 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" event={"ID":"171baadd-8608-486e-a418-65f76de1cf06","Type":"ContainerStarted","Data":"f74453dd1a956bacc6c96091072f2b5c685d3c8721019e233aa39165ee1a8171"} Jan 23 06:34:09 crc kubenswrapper[4784]: I0123 06:34:09.462020 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d484d7c8-xhg4d" event={"ID":"7979f0da-f16f-4e2a-8c1c-a667607ddcf2","Type":"ContainerStarted","Data":"643fce47d779e6847ab0e9078cd9cd3469ebdc0475c20eda75ba9e533e566b32"} Jan 23 06:34:09 crc kubenswrapper[4784]: I0123 06:34:09.462050 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d484d7c8-xhg4d" event={"ID":"7979f0da-f16f-4e2a-8c1c-a667607ddcf2","Type":"ContainerStarted","Data":"7d1cdb95b09f51319a86d375c3ce3ea74b2e0a991d63d0ccce5c6948e7fc0fc5"} Jan 23 06:34:11 crc kubenswrapper[4784]: I0123 06:34:11.477227 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" 
event={"ID":"69d35b69-1071-41f1-ba7c-37f25670f4cb","Type":"ContainerStarted","Data":"63c3c49e0edc94988c5c2f82b2ccab4e0df7a28c41877e15c1844d8452ae6953"} Jan 23 06:34:11 crc kubenswrapper[4784]: I0123 06:34:11.479610 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" event={"ID":"21b9c71e-8dc5-41c7-86a3-9d840f155413","Type":"ContainerStarted","Data":"9a244b9705a075d5692116f59312db485951ac1c6e55102b27d13c466ff35a3c"} Jan 23 06:34:11 crc kubenswrapper[4784]: I0123 06:34:11.479727 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:11 crc kubenswrapper[4784]: I0123 06:34:11.481891 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-m77xg" event={"ID":"40a88789-2452-42ca-9b44-14a6614b413c","Type":"ContainerStarted","Data":"8eeadcc43ffe589c4acc7e6ff688b61e554100a3a8823acf76e6a2f7bb8ec435"} Jan 23 06:34:11 crc kubenswrapper[4784]: I0123 06:34:11.482391 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:11 crc kubenswrapper[4784]: I0123 06:34:11.499092 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" podStartSLOduration=2.040673972 podStartE2EDuration="4.499070531s" podCreationTimestamp="2026-01-23 06:34:07 +0000 UTC" firstStartedPulling="2026-01-23 06:34:08.599297154 +0000 UTC m=+851.831805128" lastFinishedPulling="2026-01-23 06:34:11.057693703 +0000 UTC m=+854.290201687" observedRunningTime="2026-01-23 06:34:11.497710298 +0000 UTC m=+854.730218272" watchObservedRunningTime="2026-01-23 06:34:11.499070531 +0000 UTC m=+854.731578505" Jan 23 06:34:11 crc kubenswrapper[4784]: I0123 06:34:11.503418 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58d484d7c8-xhg4d" 
podStartSLOduration=4.503387927 podStartE2EDuration="4.503387927s" podCreationTimestamp="2026-01-23 06:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:34:09.487431551 +0000 UTC m=+852.719939525" watchObservedRunningTime="2026-01-23 06:34:11.503387927 +0000 UTC m=+854.735895901" Jan 23 06:34:11 crc kubenswrapper[4784]: I0123 06:34:11.532607 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-m77xg" podStartSLOduration=1.324063749 podStartE2EDuration="4.532575073s" podCreationTimestamp="2026-01-23 06:34:07 +0000 UTC" firstStartedPulling="2026-01-23 06:34:07.811645406 +0000 UTC m=+851.044153380" lastFinishedPulling="2026-01-23 06:34:11.02015669 +0000 UTC m=+854.252664704" observedRunningTime="2026-01-23 06:34:11.517591366 +0000 UTC m=+854.750099340" watchObservedRunningTime="2026-01-23 06:34:11.532575073 +0000 UTC m=+854.765083047" Jan 23 06:34:12 crc kubenswrapper[4784]: I0123 06:34:12.494812 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" event={"ID":"171baadd-8608-486e-a418-65f76de1cf06","Type":"ContainerStarted","Data":"4232f3d749650dcd9db8ed4e12d63c9a91852381acbe42f1234db5e0be8c9f7d"} Jan 23 06:34:15 crc kubenswrapper[4784]: I0123 06:34:15.533954 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" event={"ID":"69d35b69-1071-41f1-ba7c-37f25670f4cb","Type":"ContainerStarted","Data":"546aff6138fd890d81be9bceb5974bdfc54026a40a9a1d089240a956fbda963d"} Jan 23 06:34:15 crc kubenswrapper[4784]: I0123 06:34:15.555013 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-kw56f" podStartSLOduration=2.035640537 podStartE2EDuration="8.55496951s" podCreationTimestamp="2026-01-23 06:34:07 +0000 UTC" 
firstStartedPulling="2026-01-23 06:34:07.959222233 +0000 UTC m=+851.191730207" lastFinishedPulling="2026-01-23 06:34:14.478551166 +0000 UTC m=+857.711059180" observedRunningTime="2026-01-23 06:34:15.553888984 +0000 UTC m=+858.786397008" watchObservedRunningTime="2026-01-23 06:34:15.55496951 +0000 UTC m=+858.787477514" Jan 23 06:34:15 crc kubenswrapper[4784]: I0123 06:34:15.558692 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5tb77" podStartSLOduration=5.209243533 podStartE2EDuration="8.558672501s" podCreationTimestamp="2026-01-23 06:34:07 +0000 UTC" firstStartedPulling="2026-01-23 06:34:08.775376681 +0000 UTC m=+852.007884655" lastFinishedPulling="2026-01-23 06:34:12.124805649 +0000 UTC m=+855.357313623" observedRunningTime="2026-01-23 06:34:12.520380181 +0000 UTC m=+855.752888155" watchObservedRunningTime="2026-01-23 06:34:15.558672501 +0000 UTC m=+858.791180505" Jan 23 06:34:17 crc kubenswrapper[4784]: I0123 06:34:17.767164 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-m77xg" Jan 23 06:34:18 crc kubenswrapper[4784]: I0123 06:34:18.106873 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:18 crc kubenswrapper[4784]: I0123 06:34:18.106963 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:18 crc kubenswrapper[4784]: I0123 06:34:18.114157 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:18 crc kubenswrapper[4784]: I0123 06:34:18.563137 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 06:34:18 crc kubenswrapper[4784]: I0123 06:34:18.627447 4784 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-console/console-f9d7485db-2stcb"] Jan 23 06:34:28 crc kubenswrapper[4784]: I0123 06:34:28.305449 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-cjbl9" Jan 23 06:34:43 crc kubenswrapper[4784]: I0123 06:34:43.679937 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2stcb" podUID="b6c8a935-b603-40f3-8051-c705e23c20f3" containerName="console" containerID="cri-o://4731e6c21064788a257b5c1b044b1d035d18e1063df4a71aec4a44d863f42d2b" gracePeriod=15 Jan 23 06:34:43 crc kubenswrapper[4784]: I0123 06:34:43.909571 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2stcb_b6c8a935-b603-40f3-8051-c705e23c20f3/console/0.log" Jan 23 06:34:43 crc kubenswrapper[4784]: I0123 06:34:43.909983 4784 generic.go:334] "Generic (PLEG): container finished" podID="b6c8a935-b603-40f3-8051-c705e23c20f3" containerID="4731e6c21064788a257b5c1b044b1d035d18e1063df4a71aec4a44d863f42d2b" exitCode=2 Jan 23 06:34:43 crc kubenswrapper[4784]: I0123 06:34:43.910049 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2stcb" event={"ID":"b6c8a935-b603-40f3-8051-c705e23c20f3","Type":"ContainerDied","Data":"4731e6c21064788a257b5c1b044b1d035d18e1063df4a71aec4a44d863f42d2b"} Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.139381 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2stcb_b6c8a935-b603-40f3-8051-c705e23c20f3/console/0.log" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.139473 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.271949 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-trusted-ca-bundle\") pod \"b6c8a935-b603-40f3-8051-c705e23c20f3\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.272444 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-serving-cert\") pod \"b6c8a935-b603-40f3-8051-c705e23c20f3\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.272482 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtkc8\" (UniqueName: \"kubernetes.io/projected/b6c8a935-b603-40f3-8051-c705e23c20f3-kube-api-access-rtkc8\") pod \"b6c8a935-b603-40f3-8051-c705e23c20f3\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.272522 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-service-ca\") pod \"b6c8a935-b603-40f3-8051-c705e23c20f3\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.272560 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-oauth-serving-cert\") pod \"b6c8a935-b603-40f3-8051-c705e23c20f3\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.272584 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-console-config\") pod \"b6c8a935-b603-40f3-8051-c705e23c20f3\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.272648 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-oauth-config\") pod \"b6c8a935-b603-40f3-8051-c705e23c20f3\" (UID: \"b6c8a935-b603-40f3-8051-c705e23c20f3\") " Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.277408 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b6c8a935-b603-40f3-8051-c705e23c20f3" (UID: "b6c8a935-b603-40f3-8051-c705e23c20f3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.277429 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b6c8a935-b603-40f3-8051-c705e23c20f3" (UID: "b6c8a935-b603-40f3-8051-c705e23c20f3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.277877 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-service-ca" (OuterVolumeSpecName: "service-ca") pod "b6c8a935-b603-40f3-8051-c705e23c20f3" (UID: "b6c8a935-b603-40f3-8051-c705e23c20f3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.278311 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-console-config" (OuterVolumeSpecName: "console-config") pod "b6c8a935-b603-40f3-8051-c705e23c20f3" (UID: "b6c8a935-b603-40f3-8051-c705e23c20f3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.280776 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b6c8a935-b603-40f3-8051-c705e23c20f3" (UID: "b6c8a935-b603-40f3-8051-c705e23c20f3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.281105 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c8a935-b603-40f3-8051-c705e23c20f3-kube-api-access-rtkc8" (OuterVolumeSpecName: "kube-api-access-rtkc8") pod "b6c8a935-b603-40f3-8051-c705e23c20f3" (UID: "b6c8a935-b603-40f3-8051-c705e23c20f3"). InnerVolumeSpecName "kube-api-access-rtkc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.283310 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b6c8a935-b603-40f3-8051-c705e23c20f3" (UID: "b6c8a935-b603-40f3-8051-c705e23c20f3"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.377145 4784 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.377850 4784 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.377863 4784 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.377876 4784 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.377886 4784 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c8a935-b603-40f3-8051-c705e23c20f3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.377894 4784 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c8a935-b603-40f3-8051-c705e23c20f3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.377904 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtkc8\" (UniqueName: \"kubernetes.io/projected/b6c8a935-b603-40f3-8051-c705e23c20f3-kube-api-access-rtkc8\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:44 crc 
kubenswrapper[4784]: I0123 06:34:44.920564 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2stcb_b6c8a935-b603-40f3-8051-c705e23c20f3/console/0.log" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.920632 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2stcb" event={"ID":"b6c8a935-b603-40f3-8051-c705e23c20f3","Type":"ContainerDied","Data":"b981e5830f1a64fd52a46763c4df01a51479a325a7c9b3ad159b741c20ed218d"} Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.920683 4784 scope.go:117] "RemoveContainer" containerID="4731e6c21064788a257b5c1b044b1d035d18e1063df4a71aec4a44d863f42d2b" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.920792 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2stcb" Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.957226 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2stcb"] Jan 23 06:34:44 crc kubenswrapper[4784]: I0123 06:34:44.962328 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2stcb"] Jan 23 06:34:45 crc kubenswrapper[4784]: I0123 06:34:45.279588 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c8a935-b603-40f3-8051-c705e23c20f3" path="/var/lib/kubelet/pods/b6c8a935-b603-40f3-8051-c705e23c20f3/volumes" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.164190 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj"] Jan 23 06:34:47 crc kubenswrapper[4784]: E0123 06:34:47.164868 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c8a935-b603-40f3-8051-c705e23c20f3" containerName="console" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.164882 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b6c8a935-b603-40f3-8051-c705e23c20f3" containerName="console" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.165014 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c8a935-b603-40f3-8051-c705e23c20f3" containerName="console" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.166166 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.170798 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.177109 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt27d\" (UniqueName: \"kubernetes.io/projected/b0d55601-449e-4c1e-a99c-bbe643195ad1-kube-api-access-tt27d\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.177183 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.177271 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: 
\"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.198041 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj"] Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.278862 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.279294 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt27d\" (UniqueName: \"kubernetes.io/projected/b0d55601-449e-4c1e-a99c-bbe643195ad1-kube-api-access-tt27d\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.279463 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.279679 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-util\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.280137 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.300125 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt27d\" (UniqueName: \"kubernetes.io/projected/b0d55601-449e-4c1e-a99c-bbe643195ad1-kube-api-access-tt27d\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:47 crc kubenswrapper[4784]: I0123 06:34:47.487842 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:48 crc kubenswrapper[4784]: I0123 06:34:48.213321 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj"] Jan 23 06:34:49 crc kubenswrapper[4784]: I0123 06:34:49.049440 4784 generic.go:334] "Generic (PLEG): container finished" podID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerID="60523312acd1b083dd7c3fb995739aae33eef1a368564946cd8642f2d64fde73" exitCode=0 Jan 23 06:34:49 crc kubenswrapper[4784]: I0123 06:34:49.049562 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" event={"ID":"b0d55601-449e-4c1e-a99c-bbe643195ad1","Type":"ContainerDied","Data":"60523312acd1b083dd7c3fb995739aae33eef1a368564946cd8642f2d64fde73"} Jan 23 06:34:49 crc kubenswrapper[4784]: I0123 06:34:49.049984 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" event={"ID":"b0d55601-449e-4c1e-a99c-bbe643195ad1","Type":"ContainerStarted","Data":"0e172f79bbb093c6c6b3e63c095951233c7921882ddb86e7eef827cc35307764"} Jan 23 06:34:51 crc kubenswrapper[4784]: I0123 06:34:51.068983 4784 generic.go:334] "Generic (PLEG): container finished" podID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerID="f0d3f623b4151b18b12291a10dfbf6108743b8f640386c755d83b3b8393f93c6" exitCode=0 Jan 23 06:34:51 crc kubenswrapper[4784]: I0123 06:34:51.069037 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" event={"ID":"b0d55601-449e-4c1e-a99c-bbe643195ad1","Type":"ContainerDied","Data":"f0d3f623b4151b18b12291a10dfbf6108743b8f640386c755d83b3b8393f93c6"} Jan 23 06:34:52 crc kubenswrapper[4784]: I0123 06:34:52.080329 4784 
generic.go:334] "Generic (PLEG): container finished" podID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerID="e93e22e099184411a94216ac147b7431b11683a99fb355d18c4e17cf412ac4f4" exitCode=0 Jan 23 06:34:52 crc kubenswrapper[4784]: I0123 06:34:52.080407 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" event={"ID":"b0d55601-449e-4c1e-a99c-bbe643195ad1","Type":"ContainerDied","Data":"e93e22e099184411a94216ac147b7431b11683a99fb355d18c4e17cf412ac4f4"} Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.378190 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.546584 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-bundle\") pod \"b0d55601-449e-4c1e-a99c-bbe643195ad1\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.546696 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-util\") pod \"b0d55601-449e-4c1e-a99c-bbe643195ad1\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.546743 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt27d\" (UniqueName: \"kubernetes.io/projected/b0d55601-449e-4c1e-a99c-bbe643195ad1-kube-api-access-tt27d\") pod \"b0d55601-449e-4c1e-a99c-bbe643195ad1\" (UID: \"b0d55601-449e-4c1e-a99c-bbe643195ad1\") " Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.549488 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-bundle" (OuterVolumeSpecName: "bundle") pod "b0d55601-449e-4c1e-a99c-bbe643195ad1" (UID: "b0d55601-449e-4c1e-a99c-bbe643195ad1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.553332 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d55601-449e-4c1e-a99c-bbe643195ad1-kube-api-access-tt27d" (OuterVolumeSpecName: "kube-api-access-tt27d") pod "b0d55601-449e-4c1e-a99c-bbe643195ad1" (UID: "b0d55601-449e-4c1e-a99c-bbe643195ad1"). InnerVolumeSpecName "kube-api-access-tt27d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.648403 4784 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.648450 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt27d\" (UniqueName: \"kubernetes.io/projected/b0d55601-449e-4c1e-a99c-bbe643195ad1-kube-api-access-tt27d\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.782537 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-util" (OuterVolumeSpecName: "util") pod "b0d55601-449e-4c1e-a99c-bbe643195ad1" (UID: "b0d55601-449e-4c1e-a99c-bbe643195ad1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:34:53 crc kubenswrapper[4784]: I0123 06:34:53.851610 4784 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0d55601-449e-4c1e-a99c-bbe643195ad1-util\") on node \"crc\" DevicePath \"\"" Jan 23 06:34:54 crc kubenswrapper[4784]: I0123 06:34:54.098770 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" event={"ID":"b0d55601-449e-4c1e-a99c-bbe643195ad1","Type":"ContainerDied","Data":"0e172f79bbb093c6c6b3e63c095951233c7921882ddb86e7eef827cc35307764"} Jan 23 06:34:54 crc kubenswrapper[4784]: I0123 06:34:54.098844 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e172f79bbb093c6c6b3e63c095951233c7921882ddb86e7eef827cc35307764" Jan 23 06:34:54 crc kubenswrapper[4784]: I0123 06:34:54.098902 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.344443 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8589677cff-dzl65"] Jan 23 06:35:02 crc kubenswrapper[4784]: E0123 06:35:02.345422 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerName="util" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.345437 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerName="util" Jan 23 06:35:02 crc kubenswrapper[4784]: E0123 06:35:02.345449 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerName="extract" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.345455 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerName="extract" Jan 23 06:35:02 crc kubenswrapper[4784]: E0123 06:35:02.345480 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerName="pull" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.345487 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerName="pull" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.345620 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d55601-449e-4c1e-a99c-bbe643195ad1" containerName="extract" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.346150 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.349985 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.349985 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2q55r" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.350129 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.353484 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.353521 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.392280 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8589677cff-dzl65"] Jan 23 06:35:02 crc kubenswrapper[4784]: 
I0123 06:35:02.492129 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47ec951f-c0f2-40f8-9361-6ca608819c25-webhook-cert\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.492225 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5pf\" (UniqueName: \"kubernetes.io/projected/47ec951f-c0f2-40f8-9361-6ca608819c25-kube-api-access-cl5pf\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.492252 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47ec951f-c0f2-40f8-9361-6ca608819c25-apiservice-cert\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.593941 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47ec951f-c0f2-40f8-9361-6ca608819c25-webhook-cert\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.594033 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl5pf\" (UniqueName: 
\"kubernetes.io/projected/47ec951f-c0f2-40f8-9361-6ca608819c25-kube-api-access-cl5pf\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.594061 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47ec951f-c0f2-40f8-9361-6ca608819c25-apiservice-cert\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.603359 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47ec951f-c0f2-40f8-9361-6ca608819c25-apiservice-cert\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.603927 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47ec951f-c0f2-40f8-9361-6ca608819c25-webhook-cert\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.623532 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl5pf\" (UniqueName: \"kubernetes.io/projected/47ec951f-c0f2-40f8-9361-6ca608819c25-kube-api-access-cl5pf\") pod \"metallb-operator-controller-manager-8589677cff-dzl65\" (UID: \"47ec951f-c0f2-40f8-9361-6ca608819c25\") " 
pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.683357 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.702412 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj"] Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.704498 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.708065 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.708137 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.708914 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-x4njr" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.740333 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj"] Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.798550 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-webhook-cert\") pod \"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.798686 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g67x\" (UniqueName: \"kubernetes.io/projected/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-kube-api-access-5g67x\") pod \"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.798735 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-apiservice-cert\") pod \"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.900401 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g67x\" (UniqueName: \"kubernetes.io/projected/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-kube-api-access-5g67x\") pod \"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.900469 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-apiservice-cert\") pod \"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.900506 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-webhook-cert\") pod 
\"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.906171 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-apiservice-cert\") pod \"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.914335 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-webhook-cert\") pod \"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:02 crc kubenswrapper[4784]: I0123 06:35:02.927157 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g67x\" (UniqueName: \"kubernetes.io/projected/5207d75f-f4c3-4c7d-861b-5f30efec8c5f-kube-api-access-5g67x\") pod \"metallb-operator-webhook-server-59c99db6cd-6k4nj\" (UID: \"5207d75f-f4c3-4c7d-861b-5f30efec8c5f\") " pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:03 crc kubenswrapper[4784]: I0123 06:35:03.073915 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:03 crc kubenswrapper[4784]: I0123 06:35:03.166786 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8589677cff-dzl65"] Jan 23 06:35:03 crc kubenswrapper[4784]: I0123 06:35:03.339426 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj"] Jan 23 06:35:03 crc kubenswrapper[4784]: W0123 06:35:03.343347 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5207d75f_f4c3_4c7d_861b_5f30efec8c5f.slice/crio-4df3733e7292d6f6fb301c37b3378a135c2651bc4202f907f8c53b8de8544e13 WatchSource:0}: Error finding container 4df3733e7292d6f6fb301c37b3378a135c2651bc4202f907f8c53b8de8544e13: Status 404 returned error can't find the container with id 4df3733e7292d6f6fb301c37b3378a135c2651bc4202f907f8c53b8de8544e13 Jan 23 06:35:04 crc kubenswrapper[4784]: I0123 06:35:04.186489 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" event={"ID":"5207d75f-f4c3-4c7d-861b-5f30efec8c5f","Type":"ContainerStarted","Data":"4df3733e7292d6f6fb301c37b3378a135c2651bc4202f907f8c53b8de8544e13"} Jan 23 06:35:04 crc kubenswrapper[4784]: I0123 06:35:04.188075 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" event={"ID":"47ec951f-c0f2-40f8-9361-6ca608819c25","Type":"ContainerStarted","Data":"3e4d8b894b34dfad3511cd1631fe340d60a8e0f20aeff6032f9d5eef8b3e9816"} Jan 23 06:35:13 crc kubenswrapper[4784]: I0123 06:35:13.286387 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" 
event={"ID":"47ec951f-c0f2-40f8-9361-6ca608819c25","Type":"ContainerStarted","Data":"d220f20ef16a6c603cdfef64326ddf3dd1395757b1f55e7645e66f19ffe8b95c"} Jan 23 06:35:13 crc kubenswrapper[4784]: I0123 06:35:13.287227 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:13 crc kubenswrapper[4784]: I0123 06:35:13.288475 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" event={"ID":"5207d75f-f4c3-4c7d-861b-5f30efec8c5f","Type":"ContainerStarted","Data":"9e54cf50bc4086c167b2dd4f16f504c1813ae8fe94ba79a7d5a4a0c7013cbcdf"} Jan 23 06:35:13 crc kubenswrapper[4784]: I0123 06:35:13.288540 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:13 crc kubenswrapper[4784]: I0123 06:35:13.332956 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" podStartSLOduration=1.938310099 podStartE2EDuration="11.332932796s" podCreationTimestamp="2026-01-23 06:35:02 +0000 UTC" firstStartedPulling="2026-01-23 06:35:03.181960531 +0000 UTC m=+906.414468505" lastFinishedPulling="2026-01-23 06:35:12.576583228 +0000 UTC m=+915.809091202" observedRunningTime="2026-01-23 06:35:13.331539202 +0000 UTC m=+916.564047176" watchObservedRunningTime="2026-01-23 06:35:13.332932796 +0000 UTC m=+916.565440770" Jan 23 06:35:13 crc kubenswrapper[4784]: I0123 06:35:13.359122 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" podStartSLOduration=2.109319882 podStartE2EDuration="11.35910561s" podCreationTimestamp="2026-01-23 06:35:02 +0000 UTC" firstStartedPulling="2026-01-23 06:35:03.347241713 +0000 UTC m=+906.579749687" lastFinishedPulling="2026-01-23 
06:35:12.597027441 +0000 UTC m=+915.829535415" observedRunningTime="2026-01-23 06:35:13.35709258 +0000 UTC m=+916.589600554" watchObservedRunningTime="2026-01-23 06:35:13.35910561 +0000 UTC m=+916.591613584" Jan 23 06:35:23 crc kubenswrapper[4784]: I0123 06:35:23.084188 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-59c99db6cd-6k4nj" Jan 23 06:35:23 crc kubenswrapper[4784]: I0123 06:35:23.603246 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:35:23 crc kubenswrapper[4784]: I0123 06:35:23.603340 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.353997 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bpw4g"] Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.359120 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.543819 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d8177c-add0-4980-ae22-44a0ede0a599-utilities\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.543927 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72hd6\" (UniqueName: \"kubernetes.io/projected/d0d8177c-add0-4980-ae22-44a0ede0a599-kube-api-access-72hd6\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.543959 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d8177c-add0-4980-ae22-44a0ede0a599-catalog-content\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.612259 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bpw4g"] Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.645501 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72hd6\" (UniqueName: \"kubernetes.io/projected/d0d8177c-add0-4980-ae22-44a0ede0a599-kube-api-access-72hd6\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.645569 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d8177c-add0-4980-ae22-44a0ede0a599-catalog-content\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.645636 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d8177c-add0-4980-ae22-44a0ede0a599-utilities\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.646297 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d8177c-add0-4980-ae22-44a0ede0a599-utilities\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.646407 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d8177c-add0-4980-ae22-44a0ede0a599-catalog-content\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.675855 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72hd6\" (UniqueName: \"kubernetes.io/projected/d0d8177c-add0-4980-ae22-44a0ede0a599-kube-api-access-72hd6\") pod \"certified-operators-bpw4g\" (UID: \"d0d8177c-add0-4980-ae22-44a0ede0a599\") " pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:36 crc kubenswrapper[4784]: I0123 06:35:36.874517 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:35:37 crc kubenswrapper[4784]: I0123 06:35:37.387680 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bpw4g"] Jan 23 06:35:37 crc kubenswrapper[4784]: I0123 06:35:37.580225 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpw4g" event={"ID":"d0d8177c-add0-4980-ae22-44a0ede0a599","Type":"ContainerStarted","Data":"986313ea2d60549a82895f4c7b2f7d8af47e6c6cb087dea7acf4c1ccc50a4ea2"} Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.329951 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6mzfw"] Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.331836 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.349912 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mzfw"] Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.375701 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvcxw\" (UniqueName: \"kubernetes.io/projected/d722da2b-9410-4157-bebf-f1d717bdf91d-kube-api-access-dvcxw\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.375797 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-catalog-content\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 
06:35:38.375962 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-utilities\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.477091 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvcxw\" (UniqueName: \"kubernetes.io/projected/d722da2b-9410-4157-bebf-f1d717bdf91d-kube-api-access-dvcxw\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.477179 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-catalog-content\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.477218 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-utilities\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.477834 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-utilities\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.478019 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-catalog-content\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.501902 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvcxw\" (UniqueName: \"kubernetes.io/projected/d722da2b-9410-4157-bebf-f1d717bdf91d-kube-api-access-dvcxw\") pod \"redhat-marketplace-6mzfw\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.589308 4784 generic.go:334] "Generic (PLEG): container finished" podID="d0d8177c-add0-4980-ae22-44a0ede0a599" containerID="ee9f02c22d79fdbc9950528193ad536f347e7dc4af5636f62a39ee710354f4f3" exitCode=0 Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.589381 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpw4g" event={"ID":"d0d8177c-add0-4980-ae22-44a0ede0a599","Type":"ContainerDied","Data":"ee9f02c22d79fdbc9950528193ad536f347e7dc4af5636f62a39ee710354f4f3"} Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.660163 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:38 crc kubenswrapper[4784]: I0123 06:35:38.910831 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mzfw"] Jan 23 06:35:39 crc kubenswrapper[4784]: I0123 06:35:39.598211 4784 generic.go:334] "Generic (PLEG): container finished" podID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerID="2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75" exitCode=0 Jan 23 06:35:39 crc kubenswrapper[4784]: I0123 06:35:39.598255 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mzfw" event={"ID":"d722da2b-9410-4157-bebf-f1d717bdf91d","Type":"ContainerDied","Data":"2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75"} Jan 23 06:35:39 crc kubenswrapper[4784]: I0123 06:35:39.598281 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mzfw" event={"ID":"d722da2b-9410-4157-bebf-f1d717bdf91d","Type":"ContainerStarted","Data":"2336b3909a59cd0f75010823d5c8049b6f21cc4b9f12e918b347bd0e0f2cc86e"} Jan 23 06:35:41 crc kubenswrapper[4784]: I0123 06:35:41.618086 4784 generic.go:334] "Generic (PLEG): container finished" podID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerID="960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858" exitCode=0 Jan 23 06:35:41 crc kubenswrapper[4784]: I0123 06:35:41.618425 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mzfw" event={"ID":"d722da2b-9410-4157-bebf-f1d717bdf91d","Type":"ContainerDied","Data":"960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858"} Jan 23 06:35:42 crc kubenswrapper[4784]: I0123 06:35:42.628972 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mzfw" 
event={"ID":"d722da2b-9410-4157-bebf-f1d717bdf91d","Type":"ContainerStarted","Data":"23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e"} Jan 23 06:35:42 crc kubenswrapper[4784]: I0123 06:35:42.663432 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6mzfw" podStartSLOduration=2.170600466 podStartE2EDuration="4.66341282s" podCreationTimestamp="2026-01-23 06:35:38 +0000 UTC" firstStartedPulling="2026-01-23 06:35:39.600342991 +0000 UTC m=+942.832851005" lastFinishedPulling="2026-01-23 06:35:42.093155385 +0000 UTC m=+945.325663359" observedRunningTime="2026-01-23 06:35:42.65563116 +0000 UTC m=+945.888139154" watchObservedRunningTime="2026-01-23 06:35:42.66341282 +0000 UTC m=+945.895920794" Jan 23 06:35:42 crc kubenswrapper[4784]: I0123 06:35:42.688539 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.026508 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hl98f"] Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.028127 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.057040 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hl98f"] Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.176228 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92lzg\" (UniqueName: \"kubernetes.io/projected/d94be017-d632-422d-b5c7-be0029481b02-kube-api-access-92lzg\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.176299 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-catalog-content\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.176357 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-utilities\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.279053 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-catalog-content\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.279257 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-utilities\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.279329 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92lzg\" (UniqueName: \"kubernetes.io/projected/d94be017-d632-422d-b5c7-be0029481b02-kube-api-access-92lzg\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.279945 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-utilities\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.279943 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-catalog-content\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.308970 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92lzg\" (UniqueName: \"kubernetes.io/projected/d94be017-d632-422d-b5c7-be0029481b02-kube-api-access-92lzg\") pod \"community-operators-hl98f\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.367153 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.833828 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wlldd"] Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.857298 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.866765 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.867241 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lblvw" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.867406 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.922184 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-sockets\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.922244 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-reloader\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.922287 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-conf\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") 
" pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.922311 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics-certs\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.922343 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.922375 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-startup\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.922396 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xb8\" (UniqueName: \"kubernetes.io/projected/d0d8decf-1b4d-447f-9a00-301cb0c4b716-kube-api-access-79xb8\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.979063 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr"] Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.980046 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr"] Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.980157 4784 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:35:43 crc kubenswrapper[4784]: I0123 06:35:43.983391 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.035623 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5j8cg"] Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.038104 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.042914 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-conf\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.042951 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics-certs\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.042985 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.043016 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-startup\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") 
" pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.043033 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79xb8\" (UniqueName: \"kubernetes.io/projected/d0d8decf-1b4d-447f-9a00-301cb0c4b716-kube-api-access-79xb8\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.043065 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k87hb\" (UniqueName: \"kubernetes.io/projected/2840186f-b624-458b-ba7b-988df9ebf049-kube-api-access-k87hb\") pod \"frr-k8s-webhook-server-7df86c4f6c-qn8wr\" (UID: \"2840186f-b624-458b-ba7b-988df9ebf049\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.043092 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2840186f-b624-458b-ba7b-988df9ebf049-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qn8wr\" (UID: \"2840186f-b624-458b-ba7b-988df9ebf049\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.043116 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-sockets\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.043138 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-reloader\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 
06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.043595 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-reloader\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.043850 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-conf\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.044850 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.045254 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-sockets\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.045775 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d0d8decf-1b4d-447f-9a00-301cb0c4b716-frr-startup\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.045811 4784 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.045869 4784 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics-certs podName:d0d8decf-1b4d-447f-9a00-301cb0c4b716 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.545841297 +0000 UTC m=+947.778349271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics-certs") pod "frr-k8s-wlldd" (UID: "d0d8decf-1b4d-447f-9a00-301cb0c4b716") : secret "frr-k8s-certs-secret" not found Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.047850 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.048156 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.048191 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.048304 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tdhxf" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.086820 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-5hnm9"] Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.086948 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79xb8\" (UniqueName: \"kubernetes.io/projected/d0d8decf-1b4d-447f-9a00-301cb0c4b716-kube-api-access-79xb8\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.088288 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.103851 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5hnm9"] Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.104917 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.145673 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2840186f-b624-458b-ba7b-988df9ebf049-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qn8wr\" (UID: \"2840186f-b624-458b-ba7b-988df9ebf049\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.145794 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-cert\") pod \"controller-6968d8fdc4-5hnm9\" (UID: \"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.145819 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-metrics-certs\") pod \"controller-6968d8fdc4-5hnm9\" (UID: \"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.145859 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-metrics-certs\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc 
kubenswrapper[4784]: I0123 06:35:44.145888 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msgvs\" (UniqueName: \"kubernetes.io/projected/cb4d7810-332e-403f-96e6-827f7b0881e2-kube-api-access-msgvs\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.145966 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpsl\" (UniqueName: \"kubernetes.io/projected/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-kube-api-access-2kpsl\") pod \"controller-6968d8fdc4-5hnm9\" (UID: \"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.146013 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.146032 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cb4d7810-332e-403f-96e6-827f7b0881e2-metallb-excludel2\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.146060 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k87hb\" (UniqueName: \"kubernetes.io/projected/2840186f-b624-458b-ba7b-988df9ebf049-kube-api-access-k87hb\") pod \"frr-k8s-webhook-server-7df86c4f6c-qn8wr\" (UID: \"2840186f-b624-458b-ba7b-988df9ebf049\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 
06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.146702 4784 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.146879 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2840186f-b624-458b-ba7b-988df9ebf049-cert podName:2840186f-b624-458b-ba7b-988df9ebf049 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.646740706 +0000 UTC m=+947.879248670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2840186f-b624-458b-ba7b-988df9ebf049-cert") pod "frr-k8s-webhook-server-7df86c4f6c-qn8wr" (UID: "2840186f-b624-458b-ba7b-988df9ebf049") : secret "frr-k8s-webhook-server-cert" not found Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.173627 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k87hb\" (UniqueName: \"kubernetes.io/projected/2840186f-b624-458b-ba7b-988df9ebf049-kube-api-access-k87hb\") pod \"frr-k8s-webhook-server-7df86c4f6c-qn8wr\" (UID: \"2840186f-b624-458b-ba7b-988df9ebf049\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.216388 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hl98f"] Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.246814 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.246858 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/cb4d7810-332e-403f-96e6-827f7b0881e2-metallb-excludel2\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.246915 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-cert\") pod \"controller-6968d8fdc4-5hnm9\" (UID: \"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.246930 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-metrics-certs\") pod \"controller-6968d8fdc4-5hnm9\" (UID: \"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.246948 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-metrics-certs\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.246974 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msgvs\" (UniqueName: \"kubernetes.io/projected/cb4d7810-332e-403f-96e6-827f7b0881e2-kube-api-access-msgvs\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.247021 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kpsl\" (UniqueName: \"kubernetes.io/projected/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-kube-api-access-2kpsl\") pod \"controller-6968d8fdc4-5hnm9\" (UID: 
\"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.247584 4784 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.247647 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-metrics-certs podName:cb4d7810-332e-403f-96e6-827f7b0881e2 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.747626496 +0000 UTC m=+947.980134480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-metrics-certs") pod "speaker-5j8cg" (UID: "cb4d7810-332e-403f-96e6-827f7b0881e2") : secret "speaker-certs-secret" not found Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.248628 4784 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.248706 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist podName:cb4d7810-332e-403f-96e6-827f7b0881e2 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.748679022 +0000 UTC m=+947.981186996 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist") pod "speaker-5j8cg" (UID: "cb4d7810-332e-403f-96e6-827f7b0881e2") : secret "metallb-memberlist" not found Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.248781 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cb4d7810-332e-403f-96e6-827f7b0881e2-metallb-excludel2\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.280770 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kpsl\" (UniqueName: \"kubernetes.io/projected/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-kube-api-access-2kpsl\") pod \"controller-6968d8fdc4-5hnm9\" (UID: \"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.283398 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-cert\") pod \"controller-6968d8fdc4-5hnm9\" (UID: \"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.284611 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0579fb88-f47a-4ef8-bd01-2dcb5aae28ac-metrics-certs\") pod \"controller-6968d8fdc4-5hnm9\" (UID: \"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac\") " pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.324007 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msgvs\" (UniqueName: 
\"kubernetes.io/projected/cb4d7810-332e-403f-96e6-827f7b0881e2-kube-api-access-msgvs\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.527052 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.722593 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2840186f-b624-458b-ba7b-988df9ebf049-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qn8wr\" (UID: \"2840186f-b624-458b-ba7b-988df9ebf049\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.722681 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics-certs\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.728701 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2840186f-b624-458b-ba7b-988df9ebf049-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qn8wr\" (UID: \"2840186f-b624-458b-ba7b-988df9ebf049\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.738449 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0d8decf-1b4d-447f-9a00-301cb0c4b716-metrics-certs\") pod \"frr-k8s-wlldd\" (UID: \"d0d8decf-1b4d-447f-9a00-301cb0c4b716\") " pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.739978 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-hl98f" event={"ID":"d94be017-d632-422d-b5c7-be0029481b02","Type":"ContainerStarted","Data":"c1a420ae2319ce56a14df32adc76aca1c6de232978dd4e2d2721a093f6c38864"} Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.824193 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.824283 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-metrics-certs\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.824478 4784 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 06:35:44 crc kubenswrapper[4784]: E0123 06:35:44.824609 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist podName:cb4d7810-332e-403f-96e6-827f7b0881e2 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.824581475 +0000 UTC m=+949.057089449 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist") pod "speaker-5j8cg" (UID: "cb4d7810-332e-403f-96e6-827f7b0881e2") : secret "metallb-memberlist" not found Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.829441 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-metrics-certs\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.834858 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wlldd" Jan 23 06:35:44 crc kubenswrapper[4784]: I0123 06:35:44.928781 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:35:45 crc kubenswrapper[4784]: I0123 06:35:45.841407 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:45 crc kubenswrapper[4784]: E0123 06:35:45.841797 4784 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 06:35:45 crc kubenswrapper[4784]: E0123 06:35:45.841942 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist podName:cb4d7810-332e-403f-96e6-827f7b0881e2 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.841924627 +0000 UTC m=+951.074432601 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist") pod "speaker-5j8cg" (UID: "cb4d7810-332e-403f-96e6-827f7b0881e2") : secret "metallb-memberlist" not found Jan 23 06:35:47 crc kubenswrapper[4784]: I0123 06:35:47.910446 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:47 crc kubenswrapper[4784]: E0123 06:35:47.910990 4784 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 06:35:47 crc kubenswrapper[4784]: E0123 06:35:47.911111 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist podName:cb4d7810-332e-403f-96e6-827f7b0881e2 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:51.91108388 +0000 UTC m=+955.143591854 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist") pod "speaker-5j8cg" (UID: "cb4d7810-332e-403f-96e6-827f7b0881e2") : secret "metallb-memberlist" not found Jan 23 06:35:48 crc kubenswrapper[4784]: I0123 06:35:48.661442 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:48 crc kubenswrapper[4784]: I0123 06:35:48.661712 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:48 crc kubenswrapper[4784]: I0123 06:35:48.825128 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:49 crc kubenswrapper[4784]: I0123 06:35:49.969139 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:50 crc kubenswrapper[4784]: I0123 06:35:50.016648 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mzfw"] Jan 23 06:35:51 crc kubenswrapper[4784]: I0123 06:35:51.116550 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr"] Jan 23 06:35:51 crc kubenswrapper[4784]: I0123 06:35:51.294513 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5hnm9"] Jan 23 06:35:51 crc kubenswrapper[4784]: I0123 06:35:51.810518 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6mzfw" podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerName="registry-server" containerID="cri-o://23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e" gracePeriod=2 Jan 23 06:35:51 crc kubenswrapper[4784]: I0123 06:35:51.983704 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:35:51 crc kubenswrapper[4784]: E0123 06:35:51.983889 4784 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 06:35:51 crc kubenswrapper[4784]: E0123 06:35:51.983967 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist podName:cb4d7810-332e-403f-96e6-827f7b0881e2 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:59.983949478 +0000 UTC m=+963.216457472 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist") pod "speaker-5j8cg" (UID: "cb4d7810-332e-403f-96e6-827f7b0881e2") : secret "metallb-memberlist" not found Jan 23 06:35:53 crc kubenswrapper[4784]: W0123 06:35:53.263602 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0579fb88_f47a_4ef8_bd01_2dcb5aae28ac.slice/crio-5a544d7a83f1f0489ab74bb70554a81941377b0d33d551f024f1ad922cefb138 WatchSource:0}: Error finding container 5a544d7a83f1f0489ab74bb70554a81941377b0d33d551f024f1ad922cefb138: Status 404 returned error can't find the container with id 5a544d7a83f1f0489ab74bb70554a81941377b0d33d551f024f1ad922cefb138 Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.530482 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.603159 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.603366 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.692719 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvcxw\" (UniqueName: \"kubernetes.io/projected/d722da2b-9410-4157-bebf-f1d717bdf91d-kube-api-access-dvcxw\") pod \"d722da2b-9410-4157-bebf-f1d717bdf91d\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.692958 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-catalog-content\") pod \"d722da2b-9410-4157-bebf-f1d717bdf91d\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.693052 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-utilities\") pod \"d722da2b-9410-4157-bebf-f1d717bdf91d\" (UID: \"d722da2b-9410-4157-bebf-f1d717bdf91d\") " Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.697841 4784 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-utilities" (OuterVolumeSpecName: "utilities") pod "d722da2b-9410-4157-bebf-f1d717bdf91d" (UID: "d722da2b-9410-4157-bebf-f1d717bdf91d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.707305 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d722da2b-9410-4157-bebf-f1d717bdf91d-kube-api-access-dvcxw" (OuterVolumeSpecName: "kube-api-access-dvcxw") pod "d722da2b-9410-4157-bebf-f1d717bdf91d" (UID: "d722da2b-9410-4157-bebf-f1d717bdf91d"). InnerVolumeSpecName "kube-api-access-dvcxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.724724 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d722da2b-9410-4157-bebf-f1d717bdf91d" (UID: "d722da2b-9410-4157-bebf-f1d717bdf91d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.796770 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvcxw\" (UniqueName: \"kubernetes.io/projected/d722da2b-9410-4157-bebf-f1d717bdf91d-kube-api-access-dvcxw\") on node \"crc\" DevicePath \"\"" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.796828 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.796844 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d722da2b-9410-4157-bebf-f1d717bdf91d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.826247 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpw4g" event={"ID":"d0d8177c-add0-4980-ae22-44a0ede0a599","Type":"ContainerStarted","Data":"886722e6e37cf37c783ce2752b1fb60089c863020a7c5c918d66b7d41584256e"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.828586 4784 generic.go:334] "Generic (PLEG): container finished" podID="d94be017-d632-422d-b5c7-be0029481b02" containerID="1a8b4bee0668444978e777207f00942e8e9dbcfdb8381ebdb0164e24756d0092" exitCode=0 Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.828803 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl98f" event={"ID":"d94be017-d632-422d-b5c7-be0029481b02","Type":"ContainerDied","Data":"1a8b4bee0668444978e777207f00942e8e9dbcfdb8381ebdb0164e24756d0092"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.832711 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" 
event={"ID":"2840186f-b624-458b-ba7b-988df9ebf049","Type":"ContainerStarted","Data":"df44496f60cb8cd433141e405f9e594aad5b9867cb65a6738c1f33ccb5f946ef"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.834243 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerStarted","Data":"d6706acbc2d5d1bf4e1df538d8bad13c0566a3763b750057cff50f45e1914d91"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.836602 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5hnm9" event={"ID":"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac","Type":"ContainerStarted","Data":"6d3fc7283759cc839395c5c591ae4858c01511d94e674be6b04084be0f77201c"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.836666 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5hnm9" event={"ID":"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac","Type":"ContainerStarted","Data":"8edd9935da26d105215e3a092cb8b73e5a93e80a90c85539445e331f1b07bd27"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.836686 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5hnm9" event={"ID":"0579fb88-f47a-4ef8-bd01-2dcb5aae28ac","Type":"ContainerStarted","Data":"5a544d7a83f1f0489ab74bb70554a81941377b0d33d551f024f1ad922cefb138"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.836775 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.839423 4784 generic.go:334] "Generic (PLEG): container finished" podID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerID="23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e" exitCode=0 Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.839476 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6mzfw" event={"ID":"d722da2b-9410-4157-bebf-f1d717bdf91d","Type":"ContainerDied","Data":"23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.839511 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mzfw" event={"ID":"d722da2b-9410-4157-bebf-f1d717bdf91d","Type":"ContainerDied","Data":"2336b3909a59cd0f75010823d5c8049b6f21cc4b9f12e918b347bd0e0f2cc86e"} Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.839539 4784 scope.go:117] "RemoveContainer" containerID="23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.840671 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mzfw" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.880852 4784 scope.go:117] "RemoveContainer" containerID="960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.909255 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-5hnm9" podStartSLOduration=10.909234655 podStartE2EDuration="10.909234655s" podCreationTimestamp="2026-01-23 06:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:53.908299082 +0000 UTC m=+957.140807056" watchObservedRunningTime="2026-01-23 06:35:53.909234655 +0000 UTC m=+957.141742639" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.930059 4784 scope.go:117] "RemoveContainer" containerID="2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.937487 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-6mzfw"] Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.943476 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mzfw"] Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.951870 4784 scope.go:117] "RemoveContainer" containerID="23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e" Jan 23 06:35:53 crc kubenswrapper[4784]: E0123 06:35:53.952575 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e\": container with ID starting with 23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e not found: ID does not exist" containerID="23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.952640 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e"} err="failed to get container status \"23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e\": rpc error: code = NotFound desc = could not find container \"23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e\": container with ID starting with 23942d32826e2de38b897f9efa7ff68f861d9c9d607d80d30b307684390dad7e not found: ID does not exist" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.952685 4784 scope.go:117] "RemoveContainer" containerID="960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858" Jan 23 06:35:53 crc kubenswrapper[4784]: E0123 06:35:53.953321 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858\": container with ID starting with 960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858 
not found: ID does not exist" containerID="960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.953456 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858"} err="failed to get container status \"960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858\": rpc error: code = NotFound desc = could not find container \"960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858\": container with ID starting with 960f2dc25a42e98254062217226244025bbc4aa1eccb0d96ec65503b6843d858 not found: ID does not exist" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.953574 4784 scope.go:117] "RemoveContainer" containerID="2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75" Jan 23 06:35:53 crc kubenswrapper[4784]: E0123 06:35:53.954041 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75\": container with ID starting with 2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75 not found: ID does not exist" containerID="2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75" Jan 23 06:35:53 crc kubenswrapper[4784]: I0123 06:35:53.954079 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75"} err="failed to get container status \"2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75\": rpc error: code = NotFound desc = could not find container \"2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75\": container with ID starting with 2e97711c27ad0d7eb2407f289ddf2f2601b7cd01f27d557f24f04350d2c64e75 not found: ID does not exist" Jan 23 06:35:54 crc kubenswrapper[4784]: I0123 
06:35:54.849491 4784 generic.go:334] "Generic (PLEG): container finished" podID="d0d8177c-add0-4980-ae22-44a0ede0a599" containerID="886722e6e37cf37c783ce2752b1fb60089c863020a7c5c918d66b7d41584256e" exitCode=0 Jan 23 06:35:54 crc kubenswrapper[4784]: I0123 06:35:54.851282 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpw4g" event={"ID":"d0d8177c-add0-4980-ae22-44a0ede0a599","Type":"ContainerDied","Data":"886722e6e37cf37c783ce2752b1fb60089c863020a7c5c918d66b7d41584256e"} Jan 23 06:35:55 crc kubenswrapper[4784]: I0123 06:35:55.262528 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" path="/var/lib/kubelet/pods/d722da2b-9410-4157-bebf-f1d717bdf91d/volumes" Jan 23 06:35:55 crc kubenswrapper[4784]: I0123 06:35:55.901056 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl98f" event={"ID":"d94be017-d632-422d-b5c7-be0029481b02","Type":"ContainerStarted","Data":"1d655e518e45e6b59af133a068d6d6a6a4a340efb1dc4a81c69f9bbc14ec7043"} Jan 23 06:35:56 crc kubenswrapper[4784]: I0123 06:35:56.928832 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpw4g" event={"ID":"d0d8177c-add0-4980-ae22-44a0ede0a599","Type":"ContainerStarted","Data":"c850d8592528b379fe09e55ed25257df158e2d5bcbf2eea3969fa0a1c7232271"} Jan 23 06:35:56 crc kubenswrapper[4784]: I0123 06:35:56.932601 4784 generic.go:334] "Generic (PLEG): container finished" podID="d94be017-d632-422d-b5c7-be0029481b02" containerID="1d655e518e45e6b59af133a068d6d6a6a4a340efb1dc4a81c69f9bbc14ec7043" exitCode=0 Jan 23 06:35:56 crc kubenswrapper[4784]: I0123 06:35:56.932649 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl98f" 
event={"ID":"d94be017-d632-422d-b5c7-be0029481b02","Type":"ContainerDied","Data":"1d655e518e45e6b59af133a068d6d6a6a4a340efb1dc4a81c69f9bbc14ec7043"} Jan 23 06:35:56 crc kubenswrapper[4784]: I0123 06:35:56.962080 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bpw4g" podStartSLOduration=3.351004233 podStartE2EDuration="20.962050093s" podCreationTimestamp="2026-01-23 06:35:36 +0000 UTC" firstStartedPulling="2026-01-23 06:35:38.593156657 +0000 UTC m=+941.825664631" lastFinishedPulling="2026-01-23 06:35:56.204202497 +0000 UTC m=+959.436710491" observedRunningTime="2026-01-23 06:35:56.95015248 +0000 UTC m=+960.182660464" watchObservedRunningTime="2026-01-23 06:35:56.962050093 +0000 UTC m=+960.194558097" Jan 23 06:35:57 crc kubenswrapper[4784]: I0123 06:35:57.945278 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl98f" event={"ID":"d94be017-d632-422d-b5c7-be0029481b02","Type":"ContainerStarted","Data":"e06e282ac07056c18e9824a4a5287af8cd91e98485237e529676b318cfccad41"} Jan 23 06:35:59 crc kubenswrapper[4784]: I0123 06:35:59.995134 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:36:00 crc kubenswrapper[4784]: I0123 06:36:00.003526 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cb4d7810-332e-403f-96e6-827f7b0881e2-memberlist\") pod \"speaker-5j8cg\" (UID: \"cb4d7810-332e-403f-96e6-827f7b0881e2\") " pod="metallb-system/speaker-5j8cg" Jan 23 06:36:00 crc kubenswrapper[4784]: I0123 06:36:00.044890 4784 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tdhxf" Jan 23 06:36:00 crc 
kubenswrapper[4784]: I0123 06:36:00.059638 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5j8cg" Jan 23 06:36:03 crc kubenswrapper[4784]: I0123 06:36:03.368705 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:36:03 crc kubenswrapper[4784]: I0123 06:36:03.369222 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:36:03 crc kubenswrapper[4784]: I0123 06:36:03.558625 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:36:03 crc kubenswrapper[4784]: I0123 06:36:03.588540 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hl98f" podStartSLOduration=17.836626479 podStartE2EDuration="21.588515368s" podCreationTimestamp="2026-01-23 06:35:42 +0000 UTC" firstStartedPulling="2026-01-23 06:35:53.830374627 +0000 UTC m=+957.062882601" lastFinishedPulling="2026-01-23 06:35:57.582263516 +0000 UTC m=+960.814771490" observedRunningTime="2026-01-23 06:35:57.980770479 +0000 UTC m=+961.213278453" watchObservedRunningTime="2026-01-23 06:36:03.588515368 +0000 UTC m=+966.821023342" Jan 23 06:36:04 crc kubenswrapper[4784]: I0123 06:36:04.264478 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:36:04 crc kubenswrapper[4784]: I0123 06:36:04.383461 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hl98f"] Jan 23 06:36:04 crc kubenswrapper[4784]: I0123 06:36:04.534663 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-5hnm9" Jan 23 06:36:06 crc kubenswrapper[4784]: I0123 06:36:06.226720 4784 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hl98f" podUID="d94be017-d632-422d-b5c7-be0029481b02" containerName="registry-server" containerID="cri-o://e06e282ac07056c18e9824a4a5287af8cd91e98485237e529676b318cfccad41" gracePeriod=2 Jan 23 06:36:07 crc kubenswrapper[4784]: I0123 06:36:07.243501 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:36:07 crc kubenswrapper[4784]: I0123 06:36:07.247043 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:36:07 crc kubenswrapper[4784]: I0123 06:36:07.304215 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:36:08 crc kubenswrapper[4784]: I0123 06:36:08.505542 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bpw4g" Jan 23 06:36:08 crc kubenswrapper[4784]: I0123 06:36:08.740441 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bpw4g"] Jan 23 06:36:08 crc kubenswrapper[4784]: I0123 06:36:08.823254 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g2n5t"] Jan 23 06:36:08 crc kubenswrapper[4784]: I0123 06:36:08.823528 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g2n5t" podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerName="registry-server" containerID="cri-o://971226495d30c886dfbbe057a2c7fac43494180819d698cdf705b669d20f4677" gracePeriod=2 Jan 23 06:36:09 crc kubenswrapper[4784]: I0123 06:36:09.301446 4784 generic.go:334] "Generic (PLEG): container finished" podID="8272bc90-fdfc-49f1-90c1-cec4281786f0" 
containerID="971226495d30c886dfbbe057a2c7fac43494180819d698cdf705b669d20f4677" exitCode=0 Jan 23 06:36:09 crc kubenswrapper[4784]: I0123 06:36:09.301599 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2n5t" event={"ID":"8272bc90-fdfc-49f1-90c1-cec4281786f0","Type":"ContainerDied","Data":"971226495d30c886dfbbe057a2c7fac43494180819d698cdf705b669d20f4677"} Jan 23 06:36:09 crc kubenswrapper[4784]: I0123 06:36:09.304975 4784 generic.go:334] "Generic (PLEG): container finished" podID="d94be017-d632-422d-b5c7-be0029481b02" containerID="e06e282ac07056c18e9824a4a5287af8cd91e98485237e529676b318cfccad41" exitCode=0 Jan 23 06:36:09 crc kubenswrapper[4784]: I0123 06:36:09.305046 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl98f" event={"ID":"d94be017-d632-422d-b5c7-be0029481b02","Type":"ContainerDied","Data":"e06e282ac07056c18e9824a4a5287af8cd91e98485237e529676b318cfccad41"} Jan 23 06:36:10 crc kubenswrapper[4784]: E0123 06:36:10.093259 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 23 06:36:10 crc kubenswrapper[4784]: E0123 06:36:10.094054 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/bin/sh -c cp -rLf /tmp/frr/* 
/etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79xb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-wlldd_metallb-system(d0d8decf-1b4d-447f-9a00-301cb0c4b716): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:10 crc kubenswrapper[4784]: E0123 06:36:10.095242 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="metallb-system/frr-k8s-wlldd" podUID="d0d8decf-1b4d-447f-9a00-301cb0c4b716" Jan 23 06:36:10 crc kubenswrapper[4784]: E0123 06:36:10.216428 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 23 06:36:10 crc kubenswrapper[4784]: E0123 06:36:10.216705 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:frr-k8s-webhook-server,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/frr-k8s],Args:[--log-level=debug --webhook-mode=onlywebhook --disable-cert-rotation=true --namespace=$(NAMESPACE) --metrics-bind-address=:7572],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7572,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 
monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-webhook-server-7df86c4f6c-qn8wr_metallb-system(2840186f-b624-458b-ba7b-988df9ebf049): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:10 crc kubenswrapper[4784]: E0123 06:36:10.218799 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" podUID="2840186f-b624-458b-ba7b-988df9ebf049" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.332151 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5j8cg" event={"ID":"cb4d7810-332e-403f-96e6-827f7b0881e2","Type":"ContainerStarted","Data":"583a23fe1c7b9f186416481feb19e747ad0aeff19786eb3a5c8dcde7f4e6b1ad"} Jan 23 06:36:10 crc kubenswrapper[4784]: E0123 06:36:10.335338 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" podUID="2840186f-b624-458b-ba7b-988df9ebf049" Jan 23 06:36:10 crc kubenswrapper[4784]: E0123 06:36:10.335427 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-wlldd" podUID="d0d8decf-1b4d-447f-9a00-301cb0c4b716" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.463279 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.472658 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.524060 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-utilities\") pod \"8272bc90-fdfc-49f1-90c1-cec4281786f0\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.524132 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-utilities\") pod \"d94be017-d632-422d-b5c7-be0029481b02\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.524172 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-catalog-content\") pod \"d94be017-d632-422d-b5c7-be0029481b02\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.524288 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-catalog-content\") pod \"8272bc90-fdfc-49f1-90c1-cec4281786f0\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.524329 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kffm8\" (UniqueName: \"kubernetes.io/projected/8272bc90-fdfc-49f1-90c1-cec4281786f0-kube-api-access-kffm8\") pod \"8272bc90-fdfc-49f1-90c1-cec4281786f0\" (UID: \"8272bc90-fdfc-49f1-90c1-cec4281786f0\") " Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.524413 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-92lzg\" (UniqueName: \"kubernetes.io/projected/d94be017-d632-422d-b5c7-be0029481b02-kube-api-access-92lzg\") pod \"d94be017-d632-422d-b5c7-be0029481b02\" (UID: \"d94be017-d632-422d-b5c7-be0029481b02\") " Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.525460 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-utilities" (OuterVolumeSpecName: "utilities") pod "8272bc90-fdfc-49f1-90c1-cec4281786f0" (UID: "8272bc90-fdfc-49f1-90c1-cec4281786f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.526361 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-utilities" (OuterVolumeSpecName: "utilities") pod "d94be017-d632-422d-b5c7-be0029481b02" (UID: "d94be017-d632-422d-b5c7-be0029481b02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.535252 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8272bc90-fdfc-49f1-90c1-cec4281786f0-kube-api-access-kffm8" (OuterVolumeSpecName: "kube-api-access-kffm8") pod "8272bc90-fdfc-49f1-90c1-cec4281786f0" (UID: "8272bc90-fdfc-49f1-90c1-cec4281786f0"). InnerVolumeSpecName "kube-api-access-kffm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.541936 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d94be017-d632-422d-b5c7-be0029481b02-kube-api-access-92lzg" (OuterVolumeSpecName: "kube-api-access-92lzg") pod "d94be017-d632-422d-b5c7-be0029481b02" (UID: "d94be017-d632-422d-b5c7-be0029481b02"). InnerVolumeSpecName "kube-api-access-92lzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.622155 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8272bc90-fdfc-49f1-90c1-cec4281786f0" (UID: "8272bc90-fdfc-49f1-90c1-cec4281786f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.627972 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92lzg\" (UniqueName: \"kubernetes.io/projected/d94be017-d632-422d-b5c7-be0029481b02-kube-api-access-92lzg\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.628022 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.628046 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.628060 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8272bc90-fdfc-49f1-90c1-cec4281786f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.628072 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kffm8\" (UniqueName: \"kubernetes.io/projected/8272bc90-fdfc-49f1-90c1-cec4281786f0-kube-api-access-kffm8\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.681950 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d94be017-d632-422d-b5c7-be0029481b02" (UID: "d94be017-d632-422d-b5c7-be0029481b02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:36:10 crc kubenswrapper[4784]: I0123 06:36:10.729491 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94be017-d632-422d-b5c7-be0029481b02-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.343424 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g2n5t" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.343436 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2n5t" event={"ID":"8272bc90-fdfc-49f1-90c1-cec4281786f0","Type":"ContainerDied","Data":"1b85faf11c39a6aeaa1dff5f038345bf5245df0262fd4928da56600cb74c9496"} Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.344210 4784 scope.go:117] "RemoveContainer" containerID="971226495d30c886dfbbe057a2c7fac43494180819d698cdf705b669d20f4677" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.348360 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hl98f" event={"ID":"d94be017-d632-422d-b5c7-be0029481b02","Type":"ContainerDied","Data":"c1a420ae2319ce56a14df32adc76aca1c6de232978dd4e2d2721a093f6c38864"} Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.348528 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hl98f" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.351141 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5j8cg" event={"ID":"cb4d7810-332e-403f-96e6-827f7b0881e2","Type":"ContainerStarted","Data":"98b713d827ec2df811e0d71b3b66a0e6cd0284704e07dbfe2b19023991735ee9"} Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.351206 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5j8cg" event={"ID":"cb4d7810-332e-403f-96e6-827f7b0881e2","Type":"ContainerStarted","Data":"894f54e5994d620bf5cff8d60774d88dfb93e86386f5ee331aa17bd0a8ef9967"} Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.351812 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5j8cg" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.370175 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g2n5t"] Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.375476 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g2n5t"] Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.390671 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hl98f"] Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.396314 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hl98f"] Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.411179 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5j8cg" podStartSLOduration=28.411153911 podStartE2EDuration="28.411153911s" podCreationTimestamp="2026-01-23 06:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:36:11.40501817 
+0000 UTC m=+974.637526154" watchObservedRunningTime="2026-01-23 06:36:11.411153911 +0000 UTC m=+974.643661885" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.907897 4784 scope.go:117] "RemoveContainer" containerID="fcb2ffa1264588ab8033c6d6d0601d2a9ab8a6a38174249f306cd4d1d50e3761" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.928239 4784 scope.go:117] "RemoveContainer" containerID="d57213ebbb2054c3d6b9dc44f89b9a91c0c3357cc268788876518f28f8fafed4" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.956181 4784 scope.go:117] "RemoveContainer" containerID="e06e282ac07056c18e9824a4a5287af8cd91e98485237e529676b318cfccad41" Jan 23 06:36:11 crc kubenswrapper[4784]: I0123 06:36:11.982639 4784 scope.go:117] "RemoveContainer" containerID="1d655e518e45e6b59af133a068d6d6a6a4a340efb1dc4a81c69f9bbc14ec7043" Jan 23 06:36:12 crc kubenswrapper[4784]: I0123 06:36:12.001396 4784 scope.go:117] "RemoveContainer" containerID="1a8b4bee0668444978e777207f00942e8e9dbcfdb8381ebdb0164e24756d0092" Jan 23 06:36:13 crc kubenswrapper[4784]: I0123 06:36:13.267962 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" path="/var/lib/kubelet/pods/8272bc90-fdfc-49f1-90c1-cec4281786f0/volumes" Jan 23 06:36:13 crc kubenswrapper[4784]: I0123 06:36:13.269409 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d94be017-d632-422d-b5c7-be0029481b02" path="/var/lib/kubelet/pods/d94be017-d632-422d-b5c7-be0029481b02/volumes" Jan 23 06:36:20 crc kubenswrapper[4784]: I0123 06:36:20.064533 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5j8cg" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.268686 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sddxz"] Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269627 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerName="extract-utilities" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269643 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerName="extract-utilities" Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269652 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerName="extract-content" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269659 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerName="extract-content" Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269678 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerName="extract-content" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269686 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerName="extract-content" Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269694 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94be017-d632-422d-b5c7-be0029481b02" containerName="extract-utilities" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269703 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94be017-d632-422d-b5c7-be0029481b02" containerName="extract-utilities" Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269718 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94be017-d632-422d-b5c7-be0029481b02" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269728 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94be017-d632-422d-b5c7-be0029481b02" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269769 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerName="extract-utilities" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269778 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerName="extract-utilities" Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269791 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269797 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269806 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269812 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: E0123 06:36:23.269824 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94be017-d632-422d-b5c7-be0029481b02" containerName="extract-content" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269829 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94be017-d632-422d-b5c7-be0029481b02" containerName="extract-content" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269968 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8272bc90-fdfc-49f1-90c1-cec4281786f0" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269981 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d722da2b-9410-4157-bebf-f1d717bdf91d" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.269991 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d94be017-d632-422d-b5c7-be0029481b02" containerName="registry-server" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.270567 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sddxz" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.279490 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.279655 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.279799 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5cx6g" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.339544 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sddxz"] Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.346125 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn2rq\" (UniqueName: \"kubernetes.io/projected/17df6d17-2383-4d6e-8f22-e29269ad9eef-kube-api-access-nn2rq\") pod \"openstack-operator-index-sddxz\" (UID: \"17df6d17-2383-4d6e-8f22-e29269ad9eef\") " pod="openstack-operators/openstack-operator-index-sddxz" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.467152 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn2rq\" (UniqueName: \"kubernetes.io/projected/17df6d17-2383-4d6e-8f22-e29269ad9eef-kube-api-access-nn2rq\") pod \"openstack-operator-index-sddxz\" (UID: \"17df6d17-2383-4d6e-8f22-e29269ad9eef\") " pod="openstack-operators/openstack-operator-index-sddxz" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.492930 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nn2rq\" (UniqueName: \"kubernetes.io/projected/17df6d17-2383-4d6e-8f22-e29269ad9eef-kube-api-access-nn2rq\") pod \"openstack-operator-index-sddxz\" (UID: \"17df6d17-2383-4d6e-8f22-e29269ad9eef\") " pod="openstack-operators/openstack-operator-index-sddxz" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.603028 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.603121 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.603194 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.604061 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba1cd80d1af05627cca4bf817be8d5ac071e1d0a3b4a67cef6e491a9167052a0"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.604126 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" 
containerID="cri-o://ba1cd80d1af05627cca4bf817be8d5ac071e1d0a3b4a67cef6e491a9167052a0" gracePeriod=600 Jan 23 06:36:23 crc kubenswrapper[4784]: I0123 06:36:23.612236 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sddxz" Jan 23 06:36:24 crc kubenswrapper[4784]: I0123 06:36:24.580047 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" event={"ID":"2840186f-b624-458b-ba7b-988df9ebf049","Type":"ContainerStarted","Data":"869e97445b37012b7058f538dd6002468f99b431bee89f2f284ebd21aa0da63c"} Jan 23 06:36:24 crc kubenswrapper[4784]: I0123 06:36:24.582394 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:36:24 crc kubenswrapper[4784]: I0123 06:36:24.640516 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" podStartSLOduration=11.617828795 podStartE2EDuration="41.640483019s" podCreationTimestamp="2026-01-23 06:35:43 +0000 UTC" firstStartedPulling="2026-01-23 06:35:53.272503137 +0000 UTC m=+956.505011111" lastFinishedPulling="2026-01-23 06:36:23.295157361 +0000 UTC m=+986.527665335" observedRunningTime="2026-01-23 06:36:24.626620669 +0000 UTC m=+987.859128643" watchObservedRunningTime="2026-01-23 06:36:24.640483019 +0000 UTC m=+987.872990993" Jan 23 06:36:24 crc kubenswrapper[4784]: I0123 06:36:24.991202 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sddxz"] Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.232046 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sddxz"] Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.590020 4784 generic.go:334] "Generic (PLEG): container finished" podID="d0d8decf-1b4d-447f-9a00-301cb0c4b716" 
containerID="b99a27b4b01121bdc5efde88cb966e35164721df7cfbd216bd27c00133e76de6" exitCode=0 Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.590166 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerDied","Data":"b99a27b4b01121bdc5efde88cb966e35164721df7cfbd216bd27c00133e76de6"} Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.595250 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="ba1cd80d1af05627cca4bf817be8d5ac071e1d0a3b4a67cef6e491a9167052a0" exitCode=0 Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.595413 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"ba1cd80d1af05627cca4bf817be8d5ac071e1d0a3b4a67cef6e491a9167052a0"} Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.595494 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"7d73b98a0e27924b52323e09dc829b98e1ffba0a17575fb7657392d46f6773c1"} Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.595528 4784 scope.go:117] "RemoveContainer" containerID="d5f3a59b1e59c1bd355b45488149c87185e092896ddb07392d0e3d03fa4214d5" Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.598053 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sddxz" event={"ID":"17df6d17-2383-4d6e-8f22-e29269ad9eef","Type":"ContainerStarted","Data":"a0e1d96ebb360aa958e0b707f401fb671511b77b280179030b0d2adf779c367b"} Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.836792 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-q72fk"] 
Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.838683 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:25 crc kubenswrapper[4784]: I0123 06:36:25.854619 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q72fk"] Jan 23 06:36:26 crc kubenswrapper[4784]: I0123 06:36:26.002720 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgd5\" (UniqueName: \"kubernetes.io/projected/6a5cd19e-60f5-431b-87fe-4eb262ca0f2e-kube-api-access-kmgd5\") pod \"openstack-operator-index-q72fk\" (UID: \"6a5cd19e-60f5-431b-87fe-4eb262ca0f2e\") " pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:26 crc kubenswrapper[4784]: I0123 06:36:26.104087 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgd5\" (UniqueName: \"kubernetes.io/projected/6a5cd19e-60f5-431b-87fe-4eb262ca0f2e-kube-api-access-kmgd5\") pod \"openstack-operator-index-q72fk\" (UID: \"6a5cd19e-60f5-431b-87fe-4eb262ca0f2e\") " pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:26 crc kubenswrapper[4784]: I0123 06:36:26.127131 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgd5\" (UniqueName: \"kubernetes.io/projected/6a5cd19e-60f5-431b-87fe-4eb262ca0f2e-kube-api-access-kmgd5\") pod \"openstack-operator-index-q72fk\" (UID: \"6a5cd19e-60f5-431b-87fe-4eb262ca0f2e\") " pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:26 crc kubenswrapper[4784]: I0123 06:36:26.166734 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:26 crc kubenswrapper[4784]: I0123 06:36:26.612413 4784 generic.go:334] "Generic (PLEG): container finished" podID="d0d8decf-1b4d-447f-9a00-301cb0c4b716" containerID="b0009fd4806307c5d79d902985f6f623ab40f43576fa60afcc60a84710a8e3eb" exitCode=0 Jan 23 06:36:26 crc kubenswrapper[4784]: I0123 06:36:26.612888 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerDied","Data":"b0009fd4806307c5d79d902985f6f623ab40f43576fa60afcc60a84710a8e3eb"} Jan 23 06:36:26 crc kubenswrapper[4784]: I0123 06:36:26.634052 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q72fk"] Jan 23 06:36:27 crc kubenswrapper[4784]: I0123 06:36:27.629454 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q72fk" event={"ID":"6a5cd19e-60f5-431b-87fe-4eb262ca0f2e","Type":"ContainerStarted","Data":"acab206d2540194499fba6e456ce5093c35e2b9a1db95d5a8dc014c8ecf8f38c"} Jan 23 06:36:28 crc kubenswrapper[4784]: I0123 06:36:28.637996 4784 generic.go:334] "Generic (PLEG): container finished" podID="d0d8decf-1b4d-447f-9a00-301cb0c4b716" containerID="0a256dc5e44d0a3f2ce4c08f1a88883016a8fbaf698257477d458e1954c7a854" exitCode=0 Jan 23 06:36:28 crc kubenswrapper[4784]: I0123 06:36:28.638105 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerDied","Data":"0a256dc5e44d0a3f2ce4c08f1a88883016a8fbaf698257477d458e1954c7a854"} Jan 23 06:36:28 crc kubenswrapper[4784]: I0123 06:36:28.640890 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q72fk" 
event={"ID":"6a5cd19e-60f5-431b-87fe-4eb262ca0f2e","Type":"ContainerStarted","Data":"1bece541d97829898ef4c383b76803c537578aaf2d471abedd63d25f0cd206e4"} Jan 23 06:36:28 crc kubenswrapper[4784]: I0123 06:36:28.642592 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sddxz" event={"ID":"17df6d17-2383-4d6e-8f22-e29269ad9eef","Type":"ContainerStarted","Data":"fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8"} Jan 23 06:36:28 crc kubenswrapper[4784]: I0123 06:36:28.642831 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sddxz" podUID="17df6d17-2383-4d6e-8f22-e29269ad9eef" containerName="registry-server" containerID="cri-o://fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8" gracePeriod=2 Jan 23 06:36:28 crc kubenswrapper[4784]: I0123 06:36:28.686827 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-q72fk" podStartSLOduration=2.902745891 podStartE2EDuration="3.686800557s" podCreationTimestamp="2026-01-23 06:36:25 +0000 UTC" firstStartedPulling="2026-01-23 06:36:27.202991336 +0000 UTC m=+990.435499310" lastFinishedPulling="2026-01-23 06:36:27.987046012 +0000 UTC m=+991.219553976" observedRunningTime="2026-01-23 06:36:28.683486496 +0000 UTC m=+991.915994490" watchObservedRunningTime="2026-01-23 06:36:28.686800557 +0000 UTC m=+991.919308521" Jan 23 06:36:28 crc kubenswrapper[4784]: I0123 06:36:28.711050 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sddxz" podStartSLOduration=2.727315795 podStartE2EDuration="5.711019232s" podCreationTimestamp="2026-01-23 06:36:23 +0000 UTC" firstStartedPulling="2026-01-23 06:36:25.000212839 +0000 UTC m=+988.232720823" lastFinishedPulling="2026-01-23 06:36:27.983916286 +0000 UTC m=+991.216424260" observedRunningTime="2026-01-23 
06:36:28.70644727 +0000 UTC m=+991.938955244" watchObservedRunningTime="2026-01-23 06:36:28.711019232 +0000 UTC m=+991.943527206" Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.069330 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sddxz" Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.257638 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn2rq\" (UniqueName: \"kubernetes.io/projected/17df6d17-2383-4d6e-8f22-e29269ad9eef-kube-api-access-nn2rq\") pod \"17df6d17-2383-4d6e-8f22-e29269ad9eef\" (UID: \"17df6d17-2383-4d6e-8f22-e29269ad9eef\") " Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.264372 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17df6d17-2383-4d6e-8f22-e29269ad9eef-kube-api-access-nn2rq" (OuterVolumeSpecName: "kube-api-access-nn2rq") pod "17df6d17-2383-4d6e-8f22-e29269ad9eef" (UID: "17df6d17-2383-4d6e-8f22-e29269ad9eef"). InnerVolumeSpecName "kube-api-access-nn2rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.360410 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn2rq\" (UniqueName: \"kubernetes.io/projected/17df6d17-2383-4d6e-8f22-e29269ad9eef-kube-api-access-nn2rq\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.667798 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerStarted","Data":"1b4cf958de9267bf72fbbdd87885b74e2b328ebb70f6684aca4e0ab43886e5cf"} Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.667879 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerStarted","Data":"06efc361df7db814f801775479fb8c738add8a37925632a1f96617a69a96e99a"} Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.667892 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerStarted","Data":"5c648d4ed24d91845d04cdb371dc47470c97b13321b42fa6e1847e2ab7bea30a"} Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.667903 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerStarted","Data":"dd942c01ba98bb97c94c2fffb149aba51ba6563f853b54dd00b979a41b234bf0"} Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.672329 4784 generic.go:334] "Generic (PLEG): container finished" podID="17df6d17-2383-4d6e-8f22-e29269ad9eef" containerID="fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8" exitCode=0 Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.672412 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sddxz" Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.672445 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sddxz" event={"ID":"17df6d17-2383-4d6e-8f22-e29269ad9eef","Type":"ContainerDied","Data":"fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8"} Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.672535 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sddxz" event={"ID":"17df6d17-2383-4d6e-8f22-e29269ad9eef","Type":"ContainerDied","Data":"a0e1d96ebb360aa958e0b707f401fb671511b77b280179030b0d2adf779c367b"} Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.672568 4784 scope.go:117] "RemoveContainer" containerID="fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8" Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.718740 4784 scope.go:117] "RemoveContainer" containerID="fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8" Jan 23 06:36:29 crc kubenswrapper[4784]: E0123 06:36:29.719637 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8\": container with ID starting with fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8 not found: ID does not exist" containerID="fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8" Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.719690 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8"} err="failed to get container status \"fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8\": rpc error: code = NotFound desc = could not find container 
\"fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8\": container with ID starting with fdd66be47103ef5d94a0bf7dc483c6bb17abb151b8762f9cd14528b220185cb8 not found: ID does not exist" Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.752432 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sddxz"] Jan 23 06:36:29 crc kubenswrapper[4784]: I0123 06:36:29.757299 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-sddxz"] Jan 23 06:36:30 crc kubenswrapper[4784]: I0123 06:36:30.687783 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerStarted","Data":"9a5291c9328858473bd369278e2360dce2333e4521d687e70974d66547b6cc12"} Jan 23 06:36:30 crc kubenswrapper[4784]: I0123 06:36:30.687873 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wlldd" event={"ID":"d0d8decf-1b4d-447f-9a00-301cb0c4b716","Type":"ContainerStarted","Data":"8d24f08a03778600af5db5c55141d7acc11a5251a353149ba083908e2657f674"} Jan 23 06:36:30 crc kubenswrapper[4784]: I0123 06:36:30.687928 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wlldd" Jan 23 06:36:30 crc kubenswrapper[4784]: I0123 06:36:30.724195 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wlldd" podStartSLOduration=16.040326476 podStartE2EDuration="47.72417426s" podCreationTimestamp="2026-01-23 06:35:43 +0000 UTC" firstStartedPulling="2026-01-23 06:35:53.199237106 +0000 UTC m=+956.431745080" lastFinishedPulling="2026-01-23 06:36:24.8830849 +0000 UTC m=+988.115592864" observedRunningTime="2026-01-23 06:36:30.722015397 +0000 UTC m=+993.954523371" watchObservedRunningTime="2026-01-23 06:36:30.72417426 +0000 UTC m=+993.956682264" Jan 23 06:36:31 crc kubenswrapper[4784]: I0123 06:36:31.263714 
4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17df6d17-2383-4d6e-8f22-e29269ad9eef" path="/var/lib/kubelet/pods/17df6d17-2383-4d6e-8f22-e29269ad9eef/volumes" Jan 23 06:36:34 crc kubenswrapper[4784]: I0123 06:36:34.835259 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wlldd" Jan 23 06:36:34 crc kubenswrapper[4784]: I0123 06:36:34.886930 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wlldd" Jan 23 06:36:34 crc kubenswrapper[4784]: I0123 06:36:34.936944 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qn8wr" Jan 23 06:36:36 crc kubenswrapper[4784]: I0123 06:36:36.167080 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:36 crc kubenswrapper[4784]: I0123 06:36:36.167580 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:36 crc kubenswrapper[4784]: I0123 06:36:36.199602 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:36 crc kubenswrapper[4784]: I0123 06:36:36.798942 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-q72fk" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.584170 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp"] Jan 23 06:36:42 crc kubenswrapper[4784]: E0123 06:36:42.585442 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17df6d17-2383-4d6e-8f22-e29269ad9eef" containerName="registry-server" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.585468 4784 
state_mem.go:107] "Deleted CPUSet assignment" podUID="17df6d17-2383-4d6e-8f22-e29269ad9eef" containerName="registry-server" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.585632 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="17df6d17-2383-4d6e-8f22-e29269ad9eef" containerName="registry-server" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.587071 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.591197 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-v9s88" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.607404 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp"] Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.706569 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c762q\" (UniqueName: \"kubernetes.io/projected/71dd6098-21e1-4844-bf38-85ff115f9157-kube-api-access-c762q\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.706633 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-util\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.706669 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-bundle\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.808372 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c762q\" (UniqueName: \"kubernetes.io/projected/71dd6098-21e1-4844-bf38-85ff115f9157-kube-api-access-c762q\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.808949 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-util\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.809179 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-bundle\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.809622 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-bundle\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.809617 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-util\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.835132 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c762q\" (UniqueName: \"kubernetes.io/projected/71dd6098-21e1-4844-bf38-85ff115f9157-kube-api-access-c762q\") pod \"8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:42 crc kubenswrapper[4784]: I0123 06:36:42.910688 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:43 crc kubenswrapper[4784]: I0123 06:36:43.292564 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp"] Jan 23 06:36:43 crc kubenswrapper[4784]: I0123 06:36:43.804358 4784 generic.go:334] "Generic (PLEG): container finished" podID="71dd6098-21e1-4844-bf38-85ff115f9157" containerID="055c76d2cbff87150faec2b9fd1557336b2521bb95d55e4fd0c8cbef002c265f" exitCode=0 Jan 23 06:36:43 crc kubenswrapper[4784]: I0123 06:36:43.804417 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" event={"ID":"71dd6098-21e1-4844-bf38-85ff115f9157","Type":"ContainerDied","Data":"055c76d2cbff87150faec2b9fd1557336b2521bb95d55e4fd0c8cbef002c265f"} Jan 23 06:36:43 crc kubenswrapper[4784]: I0123 06:36:43.804455 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" event={"ID":"71dd6098-21e1-4844-bf38-85ff115f9157","Type":"ContainerStarted","Data":"535d2bd7e058e64f053537c18a53a50eb49afd71f89644e09bb616b33fa485bd"} Jan 23 06:36:44 crc kubenswrapper[4784]: I0123 06:36:44.816517 4784 generic.go:334] "Generic (PLEG): container finished" podID="71dd6098-21e1-4844-bf38-85ff115f9157" containerID="7623aa571ebee359f7a8613f48b0cc2b86e2eefb8f99c0683f9819e1b0f57bb1" exitCode=0 Jan 23 06:36:44 crc kubenswrapper[4784]: I0123 06:36:44.816784 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" event={"ID":"71dd6098-21e1-4844-bf38-85ff115f9157","Type":"ContainerDied","Data":"7623aa571ebee359f7a8613f48b0cc2b86e2eefb8f99c0683f9819e1b0f57bb1"} Jan 23 06:36:44 crc kubenswrapper[4784]: I0123 06:36:44.840555 4784 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wlldd" Jan 23 06:36:45 crc kubenswrapper[4784]: I0123 06:36:45.846914 4784 generic.go:334] "Generic (PLEG): container finished" podID="71dd6098-21e1-4844-bf38-85ff115f9157" containerID="624982795cefd75bfe587ee5e4f6d49cc37383dc8e555e64aa1b54bf1d41862a" exitCode=0 Jan 23 06:36:45 crc kubenswrapper[4784]: I0123 06:36:45.846996 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" event={"ID":"71dd6098-21e1-4844-bf38-85ff115f9157","Type":"ContainerDied","Data":"624982795cefd75bfe587ee5e4f6d49cc37383dc8e555e64aa1b54bf1d41862a"} Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.219474 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.287907 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-util\") pod \"71dd6098-21e1-4844-bf38-85ff115f9157\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.302385 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-util" (OuterVolumeSpecName: "util") pod "71dd6098-21e1-4844-bf38-85ff115f9157" (UID: "71dd6098-21e1-4844-bf38-85ff115f9157"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.389131 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-bundle\") pod \"71dd6098-21e1-4844-bf38-85ff115f9157\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.389213 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c762q\" (UniqueName: \"kubernetes.io/projected/71dd6098-21e1-4844-bf38-85ff115f9157-kube-api-access-c762q\") pod \"71dd6098-21e1-4844-bf38-85ff115f9157\" (UID: \"71dd6098-21e1-4844-bf38-85ff115f9157\") " Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.389504 4784 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-util\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.390022 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-bundle" (OuterVolumeSpecName: "bundle") pod "71dd6098-21e1-4844-bf38-85ff115f9157" (UID: "71dd6098-21e1-4844-bf38-85ff115f9157"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.396207 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71dd6098-21e1-4844-bf38-85ff115f9157-kube-api-access-c762q" (OuterVolumeSpecName: "kube-api-access-c762q") pod "71dd6098-21e1-4844-bf38-85ff115f9157" (UID: "71dd6098-21e1-4844-bf38-85ff115f9157"). InnerVolumeSpecName "kube-api-access-c762q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.490395 4784 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/71dd6098-21e1-4844-bf38-85ff115f9157-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.490449 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c762q\" (UniqueName: \"kubernetes.io/projected/71dd6098-21e1-4844-bf38-85ff115f9157-kube-api-access-c762q\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.865140 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" event={"ID":"71dd6098-21e1-4844-bf38-85ff115f9157","Type":"ContainerDied","Data":"535d2bd7e058e64f053537c18a53a50eb49afd71f89644e09bb616b33fa485bd"} Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.865204 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535d2bd7e058e64f053537c18a53a50eb49afd71f89644e09bb616b33fa485bd" Jan 23 06:36:47 crc kubenswrapper[4784]: I0123 06:36:47.865235 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp" Jan 23 06:36:54 crc kubenswrapper[4784]: I0123 06:36:54.933480 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc"] Jan 23 06:36:54 crc kubenswrapper[4784]: E0123 06:36:54.934409 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dd6098-21e1-4844-bf38-85ff115f9157" containerName="pull" Jan 23 06:36:54 crc kubenswrapper[4784]: I0123 06:36:54.934423 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dd6098-21e1-4844-bf38-85ff115f9157" containerName="pull" Jan 23 06:36:54 crc kubenswrapper[4784]: E0123 06:36:54.934433 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dd6098-21e1-4844-bf38-85ff115f9157" containerName="util" Jan 23 06:36:54 crc kubenswrapper[4784]: I0123 06:36:54.934439 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dd6098-21e1-4844-bf38-85ff115f9157" containerName="util" Jan 23 06:36:54 crc kubenswrapper[4784]: E0123 06:36:54.934463 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dd6098-21e1-4844-bf38-85ff115f9157" containerName="extract" Jan 23 06:36:54 crc kubenswrapper[4784]: I0123 06:36:54.934469 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dd6098-21e1-4844-bf38-85ff115f9157" containerName="extract" Jan 23 06:36:54 crc kubenswrapper[4784]: I0123 06:36:54.934583 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="71dd6098-21e1-4844-bf38-85ff115f9157" containerName="extract" Jan 23 06:36:54 crc kubenswrapper[4784]: I0123 06:36:54.935118 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 06:36:54 crc kubenswrapper[4784]: I0123 06:36:54.940454 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-grzdv" Jan 23 06:36:54 crc kubenswrapper[4784]: I0123 06:36:54.956540 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc"] Jan 23 06:36:55 crc kubenswrapper[4784]: I0123 06:36:55.021127 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnljj\" (UniqueName: \"kubernetes.io/projected/be839066-996a-463b-b96c-a340d4e55ffd-kube-api-access-tnljj\") pod \"openstack-operator-controller-init-7c664964d9-t6kpc\" (UID: \"be839066-996a-463b-b96c-a340d4e55ffd\") " pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 06:36:55 crc kubenswrapper[4784]: I0123 06:36:55.123104 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnljj\" (UniqueName: \"kubernetes.io/projected/be839066-996a-463b-b96c-a340d4e55ffd-kube-api-access-tnljj\") pod \"openstack-operator-controller-init-7c664964d9-t6kpc\" (UID: \"be839066-996a-463b-b96c-a340d4e55ffd\") " pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 06:36:55 crc kubenswrapper[4784]: I0123 06:36:55.148249 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnljj\" (UniqueName: \"kubernetes.io/projected/be839066-996a-463b-b96c-a340d4e55ffd-kube-api-access-tnljj\") pod \"openstack-operator-controller-init-7c664964d9-t6kpc\" (UID: \"be839066-996a-463b-b96c-a340d4e55ffd\") " pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 06:36:55 crc kubenswrapper[4784]: I0123 06:36:55.257920 4784 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 06:36:55 crc kubenswrapper[4784]: I0123 06:36:55.521207 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc"] Jan 23 06:36:55 crc kubenswrapper[4784]: I0123 06:36:55.534526 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:36:55 crc kubenswrapper[4784]: I0123 06:36:55.922797 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" event={"ID":"be839066-996a-463b-b96c-a340d4e55ffd","Type":"ContainerStarted","Data":"54e9c44e112d13ae8e3a4b072e98a5252680137da04bae58312b9cd009463980"} Jan 23 06:37:00 crc kubenswrapper[4784]: I0123 06:37:00.972181 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" event={"ID":"be839066-996a-463b-b96c-a340d4e55ffd","Type":"ContainerStarted","Data":"6e0e6fe8dc45648f8e69794732f52cff84e5bd78ccadb7a41c14572ae7e31bca"} Jan 23 06:37:00 crc kubenswrapper[4784]: I0123 06:37:00.973327 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 06:37:01 crc kubenswrapper[4784]: I0123 06:37:01.009403 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" podStartSLOduration=2.300993426 podStartE2EDuration="7.009369063s" podCreationTimestamp="2026-01-23 06:36:54 +0000 UTC" firstStartedPulling="2026-01-23 06:36:55.534257045 +0000 UTC m=+1018.766765019" lastFinishedPulling="2026-01-23 06:37:00.242632672 +0000 UTC m=+1023.475140656" observedRunningTime="2026-01-23 06:37:01.003009077 +0000 UTC m=+1024.235517101" watchObservedRunningTime="2026-01-23 
06:37:01.009369063 +0000 UTC m=+1024.241877037" Jan 23 06:37:05 crc kubenswrapper[4784]: I0123 06:37:05.268204 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.746562 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.748720 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.751606 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-bbpbj" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.778073 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.786475 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.787895 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.793641 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.794917 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.802330 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6rpzn" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.802495 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ww6sb" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.813740 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ns7\" (UniqueName: \"kubernetes.io/projected/0e01c35c-c9bd-4b02-adb1-be49a504ea54-kube-api-access-h7ns7\") pod \"barbican-operator-controller-manager-7f86f8796f-q7sn8\" (UID: \"0e01c35c-c9bd-4b02-adb1-be49a504ea54\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.819669 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.830265 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.851361 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.852974 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.855353 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zgmf9" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.859158 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.868720 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.870167 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.875533 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8mxcw" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.891347 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.893537 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.901863 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-87988" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.904778 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.916124 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ns7\" (UniqueName: \"kubernetes.io/projected/0e01c35c-c9bd-4b02-adb1-be49a504ea54-kube-api-access-h7ns7\") pod \"barbican-operator-controller-manager-7f86f8796f-q7sn8\" (UID: \"0e01c35c-c9bd-4b02-adb1-be49a504ea54\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.916180 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcqjw\" (UniqueName: \"kubernetes.io/projected/f54aca80-78ad-4bda-905c-0a519a4f33ed-kube-api-access-dcqjw\") pod \"glance-operator-controller-manager-78fdd796fd-nb6tb\" (UID: \"f54aca80-78ad-4bda-905c-0a519a4f33ed\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.916265 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5dj\" (UniqueName: \"kubernetes.io/projected/417f228a-38b7-448a-980d-f64d6e113646-kube-api-access-2f5dj\") pod \"heat-operator-controller-manager-594c8c9d5d-hcqtn\" (UID: \"417f228a-38b7-448a-980d-f64d6e113646\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.916286 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhsws\" (UniqueName: \"kubernetes.io/projected/7c5e978b-ac3c-439e-b2b1-ab025c130984-kube-api-access-zhsws\") pod \"cinder-operator-controller-manager-69cf5d4557-kl6d5\" (UID: \"7c5e978b-ac3c-439e-b2b1-ab025c130984\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.916325 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqp78\" (UniqueName: \"kubernetes.io/projected/55f3492a-a5c0-460b-a93b-eb680b426a7c-kube-api-access-xqp78\") pod \"designate-operator-controller-manager-b45d7bf98-zkswk\" (UID: \"55f3492a-a5c0-460b-a93b-eb680b426a7c\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.950975 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.952137 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.956188 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-v96ck" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.956207 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.962319 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.983402 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk"] Jan 23 06:37:24 crc kubenswrapper[4784]: I0123 06:37:24.992509 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ns7\" (UniqueName: \"kubernetes.io/projected/0e01c35c-c9bd-4b02-adb1-be49a504ea54-kube-api-access-h7ns7\") pod \"barbican-operator-controller-manager-7f86f8796f-q7sn8\" (UID: \"0e01c35c-c9bd-4b02-adb1-be49a504ea54\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.003352 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn"] Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.004551 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.011846 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mq668" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.017734 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f5dj\" (UniqueName: \"kubernetes.io/projected/417f228a-38b7-448a-980d-f64d6e113646-kube-api-access-2f5dj\") pod \"heat-operator-controller-manager-594c8c9d5d-hcqtn\" (UID: \"417f228a-38b7-448a-980d-f64d6e113646\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.017808 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhsws\" (UniqueName: \"kubernetes.io/projected/7c5e978b-ac3c-439e-b2b1-ab025c130984-kube-api-access-zhsws\") pod \"cinder-operator-controller-manager-69cf5d4557-kl6d5\" (UID: \"7c5e978b-ac3c-439e-b2b1-ab025c130984\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.017838 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbsw\" (UniqueName: \"kubernetes.io/projected/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-kube-api-access-lxbsw\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.017870 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqp78\" (UniqueName: \"kubernetes.io/projected/55f3492a-a5c0-460b-a93b-eb680b426a7c-kube-api-access-xqp78\") pod 
\"designate-operator-controller-manager-b45d7bf98-zkswk\" (UID: \"55f3492a-a5c0-460b-a93b-eb680b426a7c\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.017903 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.017937 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcqjw\" (UniqueName: \"kubernetes.io/projected/f54aca80-78ad-4bda-905c-0a519a4f33ed-kube-api-access-dcqjw\") pod \"glance-operator-controller-manager-78fdd796fd-nb6tb\" (UID: \"f54aca80-78ad-4bda-905c-0a519a4f33ed\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.017974 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csfd\" (UniqueName: \"kubernetes.io/projected/4fa12cd4-f2bc-4863-8b67-e246a0becee3-kube-api-access-6csfd\") pod \"horizon-operator-controller-manager-77d5c5b54f-lvmlf\" (UID: \"4fa12cd4-f2bc-4863-8b67-e246a0becee3\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.038137 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn"] Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.043936 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl"] Jan 23 06:37:25 crc kubenswrapper[4784]: 
I0123 06:37:25.045192 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.051725 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.053608 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-5rs2t"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.055803 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.061454 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jw56c"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.066623 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcqjw\" (UniqueName: \"kubernetes.io/projected/f54aca80-78ad-4bda-905c-0a519a4f33ed-kube-api-access-dcqjw\") pod \"glance-operator-controller-manager-78fdd796fd-nb6tb\" (UID: \"f54aca80-78ad-4bda-905c-0a519a4f33ed\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.072582 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f5dj\" (UniqueName: \"kubernetes.io/projected/417f228a-38b7-448a-980d-f64d6e113646-kube-api-access-2f5dj\") pod \"heat-operator-controller-manager-594c8c9d5d-hcqtn\" (UID: \"417f228a-38b7-448a-980d-f64d6e113646\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.072678 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.073794 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.074429 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.075897 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cqtxn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.091628 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.093133 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.106687 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqp78\" (UniqueName: \"kubernetes.io/projected/55f3492a-a5c0-460b-a93b-eb680b426a7c-kube-api-access-xqp78\") pod \"designate-operator-controller-manager-b45d7bf98-zkswk\" (UID: \"55f3492a-a5c0-460b-a93b-eb680b426a7c\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.122683 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xfr8p"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.123879 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5sz7\" (UniqueName: \"kubernetes.io/projected/138e85ae-26a7-45f3-ac25-61ece9cf8573-kube-api-access-m5sz7\") pod \"manila-operator-controller-manager-78c6999f6f-wzjzl\" (UID: \"138e85ae-26a7-45f3-ac25-61ece9cf8573\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.126255 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhsws\" (UniqueName: \"kubernetes.io/projected/7c5e978b-ac3c-439e-b2b1-ab025c130984-kube-api-access-zhsws\") pod \"cinder-operator-controller-manager-69cf5d4557-kl6d5\" (UID: \"7c5e978b-ac3c-439e-b2b1-ab025c130984\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.126393 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.126639 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csfd\" (UniqueName: \"kubernetes.io/projected/4fa12cd4-f2bc-4863-8b67-e246a0becee3-kube-api-access-6csfd\") pod \"horizon-operator-controller-manager-77d5c5b54f-lvmlf\" (UID: \"4fa12cd4-f2bc-4863-8b67-e246a0becee3\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.126765 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngl6\" (UniqueName: \"kubernetes.io/projected/89f228f9-5c69-4e48-bf35-01cc25b56ecd-kube-api-access-pngl6\") pod \"ironic-operator-controller-manager-598f7747c9-2vptn\" (UID: \"89f228f9-5c69-4e48-bf35-01cc25b56ecd\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.126848 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pbt\" (UniqueName: \"kubernetes.io/projected/1cd86a7e-7738-4a67-9c19-d34a70dbc9fe-kube-api-access-r2pbt\") pod \"keystone-operator-controller-manager-b8b6d4659-7znp2\" (UID: \"1cd86a7e-7738-4a67-9c19-d34a70dbc9fe\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.126946 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7r79\" (UniqueName: \"kubernetes.io/projected/f809f5f2-7409-4d7e-b938-1efc34dc4c2f-kube-api-access-n7r79\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7\" (UID: \"f809f5f2-7409-4d7e-b938-1efc34dc4c2f\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.127126 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbsw\" (UniqueName: \"kubernetes.io/projected/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-kube-api-access-lxbsw\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk"
Jan 23 06:37:25 crc kubenswrapper[4784]: E0123 06:37:25.128332 4784 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 23 06:37:25 crc kubenswrapper[4784]: E0123 06:37:25.128495 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert podName:758913f1-9ef1-4fe9-9d5f-2cb794fcddef nodeName:}" failed. No retries permitted until 2026-01-23 06:37:25.628465679 +0000 UTC m=+1048.860973653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert") pod "infra-operator-controller-manager-58749ffdfb-hl8gk" (UID: "758913f1-9ef1-4fe9-9d5f-2cb794fcddef") : secret "infra-operator-webhook-server-cert" not found
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.141934 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.163844 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbsw\" (UniqueName: \"kubernetes.io/projected/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-kube-api-access-lxbsw\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.182419 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csfd\" (UniqueName: \"kubernetes.io/projected/4fa12cd4-f2bc-4863-8b67-e246a0becee3-kube-api-access-6csfd\") pod \"horizon-operator-controller-manager-77d5c5b54f-lvmlf\" (UID: \"4fa12cd4-f2bc-4863-8b67-e246a0becee3\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.182876 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.192650 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.197791 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.217686 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.230366 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.234492 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6bwc\" (UniqueName: \"kubernetes.io/projected/be79eaa0-8040-4009-9f16-fcb56bffbff7-kube-api-access-q6bwc\") pod \"neutron-operator-controller-manager-78d58447c5-krp8w\" (UID: \"be79eaa0-8040-4009-9f16-fcb56bffbff7\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.241042 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5sz7\" (UniqueName: \"kubernetes.io/projected/138e85ae-26a7-45f3-ac25-61ece9cf8573-kube-api-access-m5sz7\") pod \"manila-operator-controller-manager-78c6999f6f-wzjzl\" (UID: \"138e85ae-26a7-45f3-ac25-61ece9cf8573\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.241432 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngl6\" (UniqueName: \"kubernetes.io/projected/89f228f9-5c69-4e48-bf35-01cc25b56ecd-kube-api-access-pngl6\") pod \"ironic-operator-controller-manager-598f7747c9-2vptn\" (UID: \"89f228f9-5c69-4e48-bf35-01cc25b56ecd\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.241521 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2pbt\" (UniqueName: \"kubernetes.io/projected/1cd86a7e-7738-4a67-9c19-d34a70dbc9fe-kube-api-access-r2pbt\") pod \"keystone-operator-controller-manager-b8b6d4659-7znp2\" (UID: \"1cd86a7e-7738-4a67-9c19-d34a70dbc9fe\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.241618 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7r79\" (UniqueName: \"kubernetes.io/projected/f809f5f2-7409-4d7e-b938-1efc34dc4c2f-kube-api-access-n7r79\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7\" (UID: \"f809f5f2-7409-4d7e-b938-1efc34dc4c2f\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.241907 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.236044 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.243225 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.257685 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sz5hs"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.258088 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.272437 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5sz7\" (UniqueName: \"kubernetes.io/projected/138e85ae-26a7-45f3-ac25-61ece9cf8573-kube-api-access-m5sz7\") pod \"manila-operator-controller-manager-78c6999f6f-wzjzl\" (UID: \"138e85ae-26a7-45f3-ac25-61ece9cf8573\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.272491 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngl6\" (UniqueName: \"kubernetes.io/projected/89f228f9-5c69-4e48-bf35-01cc25b56ecd-kube-api-access-pngl6\") pod \"ironic-operator-controller-manager-598f7747c9-2vptn\" (UID: \"89f228f9-5c69-4e48-bf35-01cc25b56ecd\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.272975 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.273955 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.277608 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7r79\" (UniqueName: \"kubernetes.io/projected/f809f5f2-7409-4d7e-b938-1efc34dc4c2f-kube-api-access-n7r79\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7\" (UID: \"f809f5f2-7409-4d7e-b938-1efc34dc4c2f\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.285447 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.286244 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jmh7c"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.287510 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.295704 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.299085 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2pbt\" (UniqueName: \"kubernetes.io/projected/1cd86a7e-7738-4a67-9c19-d34a70dbc9fe-kube-api-access-r2pbt\") pod \"keystone-operator-controller-manager-b8b6d4659-7znp2\" (UID: \"1cd86a7e-7738-4a67-9c19-d34a70dbc9fe\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.313905 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.315164 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.320523 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.322819 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cjkpn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.323030 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.346926 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfdk6\" (UniqueName: \"kubernetes.io/projected/89a376c8-b238-445d-99da-b85f3c421125-kube-api-access-hfdk6\") pod \"octavia-operator-controller-manager-7bd9774b6-jqhrt\" (UID: \"89a376c8-b238-445d-99da-b85f3c421125\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.347123 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkqrp\" (UniqueName: \"kubernetes.io/projected/9bc11b97-7610-4c0f-898a-bb42b42c37d7-kube-api-access-jkqrp\") pod \"nova-operator-controller-manager-6b8bc8d87d-82hzn\" (UID: \"9bc11b97-7610-4c0f-898a-bb42b42c37d7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.347257 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6bwc\" (UniqueName: \"kubernetes.io/projected/be79eaa0-8040-4009-9f16-fcb56bffbff7-kube-api-access-q6bwc\") pod \"neutron-operator-controller-manager-78d58447c5-krp8w\" (UID: \"be79eaa0-8040-4009-9f16-fcb56bffbff7\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.348993 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.365300 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.366438 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.373510 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nknw8"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.379394 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.391663 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.392214 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6bwc\" (UniqueName: \"kubernetes.io/projected/be79eaa0-8040-4009-9f16-fcb56bffbff7-kube-api-access-q6bwc\") pod \"neutron-operator-controller-manager-78d58447c5-krp8w\" (UID: \"be79eaa0-8040-4009-9f16-fcb56bffbff7\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.393852 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.396853 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4jjdb"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.411224 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.414359 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.418263 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.420250 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4d6s8"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.429373 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.447065 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.451780 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.451911 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkqrp\" (UniqueName: \"kubernetes.io/projected/9bc11b97-7610-4c0f-898a-bb42b42c37d7-kube-api-access-jkqrp\") pod \"nova-operator-controller-manager-6b8bc8d87d-82hzn\" (UID: \"9bc11b97-7610-4c0f-898a-bb42b42c37d7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.452036 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zmpm\" (UniqueName: \"kubernetes.io/projected/2c2a2d81-11ef-4146-ad50-8f7f39163253-kube-api-access-8zmpm\") pod \"placement-operator-controller-manager-5d646b7d76-c2btv\" (UID: \"2c2a2d81-11ef-4146-ad50-8f7f39163253\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.452124 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh4gb\" (UniqueName: \"kubernetes.io/projected/500659da-123f-4500-9c50-2b7b3b7656df-kube-api-access-lh4gb\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.452162 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.452191 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkf55\" (UniqueName: \"kubernetes.io/projected/e8de3214-d1e9-4800-9ace-51a85b326df8-kube-api-access-dkf55\") pod \"ovn-operator-controller-manager-55db956ddc-2wrsg\" (UID: \"e8de3214-d1e9-4800-9ace-51a85b326df8\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.452246 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfdk6\" (UniqueName: \"kubernetes.io/projected/89a376c8-b238-445d-99da-b85f3c421125-kube-api-access-hfdk6\") pod \"octavia-operator-controller-manager-7bd9774b6-jqhrt\" (UID: \"89a376c8-b238-445d-99da-b85f3c421125\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.457262 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.480572 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-kt9h4"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.483424 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.484981 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.504947 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-94f28"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.506255 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.509687 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wcr7p"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.513844 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfdk6\" (UniqueName: \"kubernetes.io/projected/89a376c8-b238-445d-99da-b85f3c421125-kube-api-access-hfdk6\") pod \"octavia-operator-controller-manager-7bd9774b6-jqhrt\" (UID: \"89a376c8-b238-445d-99da-b85f3c421125\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.539196 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-94f28"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.553510 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqprp\" (UniqueName: \"kubernetes.io/projected/d6c01b10-21b9-4e8b-b051-6f148f468828-kube-api-access-pqprp\") pod \"swift-operator-controller-manager-547cbdb99f-gbncb\" (UID: \"d6c01b10-21b9-4e8b-b051-6f148f468828\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.553605 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zmpm\" (UniqueName: \"kubernetes.io/projected/2c2a2d81-11ef-4146-ad50-8f7f39163253-kube-api-access-8zmpm\") pod \"placement-operator-controller-manager-5d646b7d76-c2btv\" (UID: \"2c2a2d81-11ef-4146-ad50-8f7f39163253\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.553648 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh4gb\" (UniqueName: \"kubernetes.io/projected/500659da-123f-4500-9c50-2b7b3b7656df-kube-api-access-lh4gb\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.553673 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t5hc\" (UniqueName: \"kubernetes.io/projected/3a006f0b-6298-4509-9533-178b38906875-kube-api-access-6t5hc\") pod \"telemetry-operator-controller-manager-85cd9769bb-c2zh7\" (UID: \"3a006f0b-6298-4509-9533-178b38906875\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.553694 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.553719 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkf55\" (UniqueName: \"kubernetes.io/projected/e8de3214-d1e9-4800-9ace-51a85b326df8-kube-api-access-dkf55\") pod \"ovn-operator-controller-manager-55db956ddc-2wrsg\" (UID: \"e8de3214-d1e9-4800-9ace-51a85b326df8\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.553765 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8tj\" (UniqueName: \"kubernetes.io/projected/3b13bce8-a43d-4833-9472-81f048a95be3-kube-api-access-9m8tj\") pod \"test-operator-controller-manager-69797bbcbd-94f28\" (UID: \"3b13bce8-a43d-4833-9472-81f048a95be3\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28"
Jan 23 06:37:25 crc kubenswrapper[4784]: E0123 06:37:25.554404 4784 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 06:37:25 crc kubenswrapper[4784]: E0123 06:37:25.554460 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert podName:500659da-123f-4500-9c50-2b7b3b7656df nodeName:}" failed. No retries permitted until 2026-01-23 06:37:26.054442408 +0000 UTC m=+1049.286950382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" (UID: "500659da-123f-4500-9c50-2b7b3b7656df") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.558946 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.580348 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.581631 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.600579 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-z8v8f"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.600676 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt"]
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.602639 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkqrp\" (UniqueName: \"kubernetes.io/projected/9bc11b97-7610-4c0f-898a-bb42b42c37d7-kube-api-access-jkqrp\") pod \"nova-operator-controller-manager-6b8bc8d87d-82hzn\" (UID: \"9bc11b97-7610-4c0f-898a-bb42b42c37d7\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.603091 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.624024 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh4gb\" (UniqueName: \"kubernetes.io/projected/500659da-123f-4500-9c50-2b7b3b7656df-kube-api-access-lh4gb\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.625269 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zmpm\" (UniqueName: \"kubernetes.io/projected/2c2a2d81-11ef-4146-ad50-8f7f39163253-kube-api-access-8zmpm\") pod \"placement-operator-controller-manager-5d646b7d76-c2btv\" (UID: \"2c2a2d81-11ef-4146-ad50-8f7f39163253\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.636859 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkf55\" (UniqueName: \"kubernetes.io/projected/e8de3214-d1e9-4800-9ace-51a85b326df8-kube-api-access-dkf55\") pod \"ovn-operator-controller-manager-55db956ddc-2wrsg\" (UID: \"e8de3214-d1e9-4800-9ace-51a85b326df8\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.656634 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.658681 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqprp\" (UniqueName: \"kubernetes.io/projected/d6c01b10-21b9-4e8b-b051-6f148f468828-kube-api-access-pqprp\") pod \"swift-operator-controller-manager-547cbdb99f-gbncb\" (UID: \"d6c01b10-21b9-4e8b-b051-6f148f468828\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.658721 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfm48\" (UniqueName: \"kubernetes.io/projected/2e269fdb-0502-4d62-9a0d-15094fdd942c-kube-api-access-hfm48\") pod \"watcher-operator-controller-manager-5b5d4f4b97-64mxt\" (UID: \"2e269fdb-0502-4d62-9a0d-15094fdd942c\") " pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.658801 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.658866 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t5hc\" (UniqueName: \"kubernetes.io/projected/3a006f0b-6298-4509-9533-178b38906875-kube-api-access-6t5hc\") pod \"telemetry-operator-controller-manager-85cd9769bb-c2zh7\" (UID: \"3a006f0b-6298-4509-9533-178b38906875\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7"
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.658908 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8tj\" (UniqueName: \"kubernetes.io/projected/3b13bce8-a43d-4833-9472-81f048a95be3-kube-api-access-9m8tj\") pod \"test-operator-controller-manager-69797bbcbd-94f28\" (UID: \"3b13bce8-a43d-4833-9472-81f048a95be3\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28"
Jan 23 06:37:25 crc kubenswrapper[4784]: E0123 06:37:25.659673 4784 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 23 06:37:25 crc kubenswrapper[4784]: E0123 06:37:25.659814 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert podName:758913f1-9ef1-4fe9-9d5f-2cb794fcddef nodeName:}" failed. No retries permitted until 2026-01-23 06:37:26.659775166 +0000 UTC m=+1049.892283140 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert") pod "infra-operator-controller-manager-58749ffdfb-hl8gk" (UID: "758913f1-9ef1-4fe9-9d5f-2cb794fcddef") : secret "infra-operator-webhook-server-cert" not found
Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.677270 4784 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.713063 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t5hc\" (UniqueName: \"kubernetes.io/projected/3a006f0b-6298-4509-9533-178b38906875-kube-api-access-6t5hc\") pod \"telemetry-operator-controller-manager-85cd9769bb-c2zh7\" (UID: \"3a006f0b-6298-4509-9533-178b38906875\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.714927 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.754598 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.760795 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfm48\" (UniqueName: \"kubernetes.io/projected/2e269fdb-0502-4d62-9a0d-15094fdd942c-kube-api-access-hfm48\") pod \"watcher-operator-controller-manager-5b5d4f4b97-64mxt\" (UID: \"2e269fdb-0502-4d62-9a0d-15094fdd942c\") " pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.765628 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqprp\" (UniqueName: \"kubernetes.io/projected/d6c01b10-21b9-4e8b-b051-6f148f468828-kube-api-access-pqprp\") pod \"swift-operator-controller-manager-547cbdb99f-gbncb\" (UID: \"d6c01b10-21b9-4e8b-b051-6f148f468828\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.827625 4784 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.882028 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8tj\" (UniqueName: \"kubernetes.io/projected/3b13bce8-a43d-4833-9472-81f048a95be3-kube-api-access-9m8tj\") pod \"test-operator-controller-manager-69797bbcbd-94f28\" (UID: \"3b13bce8-a43d-4833-9472-81f048a95be3\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.901886 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfm48\" (UniqueName: \"kubernetes.io/projected/2e269fdb-0502-4d62-9a0d-15094fdd942c-kube-api-access-hfm48\") pod \"watcher-operator-controller-manager-5b5d4f4b97-64mxt\" (UID: \"2e269fdb-0502-4d62-9a0d-15094fdd942c\") " pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.908351 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn"] Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.913642 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.929919 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.930306 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9kq6t" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.931772 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.933824 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt"] Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.936468 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.938768 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-69ch4" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.944494 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.991201 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.991279 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5mg\" (UniqueName: \"kubernetes.io/projected/409eb30c-947e-4d15-9b7c-8a73ba35ad70-kube-api-access-fg5mg\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:25 crc kubenswrapper[4784]: I0123 06:37:25.991390 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.007053 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn"] Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.019462 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt"] Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.032164 4784 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8"] Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.054325 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.094797 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.095238 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p827t\" (UniqueName: \"kubernetes.io/projected/80f7466e-7d6a-4416-9259-c30d69ee725e-kube-api-access-p827t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5dsxt\" (UID: \"80f7466e-7d6a-4416-9259-c30d69ee725e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.095288 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.095351 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod 
\"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.095396 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg5mg\" (UniqueName: \"kubernetes.io/projected/409eb30c-947e-4d15-9b7c-8a73ba35ad70-kube-api-access-fg5mg\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.096063 4784 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.096143 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:26.596122167 +0000 UTC m=+1049.828630141 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "metrics-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.096353 4784 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.096380 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert podName:500659da-123f-4500-9c50-2b7b3b7656df nodeName:}" failed. No retries permitted until 2026-01-23 06:37:27.096372284 +0000 UTC m=+1050.328880258 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" (UID: "500659da-123f-4500-9c50-2b7b3b7656df") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.096416 4784 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.096437 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:26.596428685 +0000 UTC m=+1049.828936659 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "webhook-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.143988 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5mg\" (UniqueName: \"kubernetes.io/projected/409eb30c-947e-4d15-9b7c-8a73ba35ad70-kube-api-access-fg5mg\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.151726 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.668844 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.668945 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.669000 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p827t\" (UniqueName: \"kubernetes.io/projected/80f7466e-7d6a-4416-9259-c30d69ee725e-kube-api-access-p827t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5dsxt\" (UID: \"80f7466e-7d6a-4416-9259-c30d69ee725e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.669096 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.669335 4784 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.669430 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:27.669397885 +0000 UTC m=+1050.901905869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "webhook-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.670061 4784 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.670110 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert podName:758913f1-9ef1-4fe9-9d5f-2cb794fcddef nodeName:}" failed. No retries permitted until 2026-01-23 06:37:28.670095191 +0000 UTC m=+1051.902603185 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert") pod "infra-operator-controller-manager-58749ffdfb-hl8gk" (UID: "758913f1-9ef1-4fe9-9d5f-2cb794fcddef") : secret "infra-operator-webhook-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.670185 4784 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: E0123 06:37:26.670226 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:27.670211074 +0000 UTC m=+1050.902719068 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "metrics-server-cert" not found Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.795173 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p827t\" (UniqueName: \"kubernetes.io/projected/80f7466e-7d6a-4416-9259-c30d69ee725e-kube-api-access-p827t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5dsxt\" (UID: \"80f7466e-7d6a-4416-9259-c30d69ee725e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.796958 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" event={"ID":"0e01c35c-c9bd-4b02-adb1-be49a504ea54","Type":"ContainerStarted","Data":"e19bef67a58acdb5452a17e6a46fc429575434a9853b44dd87d2df2a096a27a1"} Jan 23 06:37:26 crc kubenswrapper[4784]: I0123 06:37:26.943930 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk"] Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:26.997262 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb"] Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.047116 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7"] Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.051280 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.066371 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn"] Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.190289 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:37:27 crc kubenswrapper[4784]: E0123 06:37:27.190716 4784 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:37:27 crc kubenswrapper[4784]: E0123 06:37:27.190836 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert podName:500659da-123f-4500-9c50-2b7b3b7656df nodeName:}" failed. No retries permitted until 2026-01-23 06:37:29.190808236 +0000 UTC m=+1052.423316210 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" (UID: "500659da-123f-4500-9c50-2b7b3b7656df") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.497728 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn"] Jan 23 06:37:27 crc kubenswrapper[4784]: W0123 06:37:27.507573 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fa12cd4_f2bc_4863_8b67_e246a0becee3.slice/crio-545c040c0d05955c533467bfabbf2b4df07ac44b5647f4f1d535b70ad3748be8 WatchSource:0}: Error finding container 545c040c0d05955c533467bfabbf2b4df07ac44b5647f4f1d535b70ad3748be8: Status 404 returned error can't find the container with id 545c040c0d05955c533467bfabbf2b4df07ac44b5647f4f1d535b70ad3748be8 Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.515124 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf"] Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.746653 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.746779 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod 
\"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:27 crc kubenswrapper[4784]: E0123 06:37:27.747015 4784 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:37:27 crc kubenswrapper[4784]: E0123 06:37:27.747109 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:29.747071726 +0000 UTC m=+1052.979579700 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "webhook-server-cert" not found Jan 23 06:37:27 crc kubenswrapper[4784]: E0123 06:37:27.747109 4784 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:37:27 crc kubenswrapper[4784]: E0123 06:37:27.747200 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:29.747171438 +0000 UTC m=+1052.979679412 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "metrics-server-cert" not found Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.837262 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w"] Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.858120 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" event={"ID":"417f228a-38b7-448a-980d-f64d6e113646","Type":"ContainerStarted","Data":"9306333cf47ab4af13b7040e1acc745901bc172a2647ba92e8fd09045959518a"} Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.861420 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" event={"ID":"4fa12cd4-f2bc-4863-8b67-e246a0becee3","Type":"ContainerStarted","Data":"545c040c0d05955c533467bfabbf2b4df07ac44b5647f4f1d535b70ad3748be8"} Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.870114 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" event={"ID":"f809f5f2-7409-4d7e-b938-1efc34dc4c2f","Type":"ContainerStarted","Data":"3cbb4a1a5623b9db657d8fed513dd9e7a656cee0e107494af852d90a885fe28c"} Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.873540 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" event={"ID":"f54aca80-78ad-4bda-905c-0a519a4f33ed","Type":"ContainerStarted","Data":"45b9bf074f69d508eb3bd8919af42f564034ed516b48fb9ef93fa1243f560e0e"} Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.876335 4784 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" event={"ID":"89f228f9-5c69-4e48-bf35-01cc25b56ecd","Type":"ContainerStarted","Data":"3f342b15f0608a2de30b3221bb594bcdccda56328a2323d926d75d97d67e23ba"} Jan 23 06:37:27 crc kubenswrapper[4784]: I0123 06:37:27.882610 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" event={"ID":"55f3492a-a5c0-460b-a93b-eb680b426a7c","Type":"ContainerStarted","Data":"55d3b707b71f82dc989a239e63bf56304c8098660554e941197324cfe5f1247c"} Jan 23 06:37:27 crc kubenswrapper[4784]: W0123 06:37:27.929156 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe79eaa0_8040_4009_9f16_fcb56bffbff7.slice/crio-a106bd453e31f5edd0d734afa63d2c11fda15b1df355bf5eb34d0be820b83176 WatchSource:0}: Error finding container a106bd453e31f5edd0d734afa63d2c11fda15b1df355bf5eb34d0be820b83176: Status 404 returned error can't find the container with id a106bd453e31f5edd0d734afa63d2c11fda15b1df355bf5eb34d0be820b83176 Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.196029 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn"] Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.213517 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt"] Jan 23 06:37:28 crc kubenswrapper[4784]: W0123 06:37:28.660323 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c2a2d81_11ef_4146_ad50_8f7f39163253.slice/crio-2e4e7bbde49320ff93c660a31af06f5bcadafca6f7447db650c7e996873be190 WatchSource:0}: Error finding container 2e4e7bbde49320ff93c660a31af06f5bcadafca6f7447db650c7e996873be190: Status 404 returned error can't find the container with id 
2e4e7bbde49320ff93c660a31af06f5bcadafca6f7447db650c7e996873be190 Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.670396 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:28 crc kubenswrapper[4784]: E0123 06:37:28.670641 4784 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 06:37:28 crc kubenswrapper[4784]: E0123 06:37:28.670732 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert podName:758913f1-9ef1-4fe9-9d5f-2cb794fcddef nodeName:}" failed. No retries permitted until 2026-01-23 06:37:32.670706401 +0000 UTC m=+1055.903214365 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert") pod "infra-operator-controller-manager-58749ffdfb-hl8gk" (UID: "758913f1-9ef1-4fe9-9d5f-2cb794fcddef") : secret "infra-operator-webhook-server-cert" not found Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.674780 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv"] Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.694183 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-94f28"] Jan 23 06:37:28 crc kubenswrapper[4784]: W0123 06:37:28.695986 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b13bce8_a43d_4833_9472_81f048a95be3.slice/crio-413e80643541d79d8a74b534dfb3e81d02c0286e5509e9edcaf3201e5631add2 WatchSource:0}: Error finding container 413e80643541d79d8a74b534dfb3e81d02c0286e5509e9edcaf3201e5631add2: Status 404 returned error can't find the container with id 413e80643541d79d8a74b534dfb3e81d02c0286e5509e9edcaf3201e5631add2 Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.710646 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7"] Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.746906 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl"] Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.764811 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5"] Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.784383 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt"] Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.793613 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb"] Jan 23 06:37:28 crc kubenswrapper[4784]: W0123 06:37:28.801769 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e269fdb_0502_4d62_9a0d_15094fdd942c.slice/crio-9cefd78ff09187574c480af467c31246381399035caa9bfbccaa5fbfb0bb669f WatchSource:0}: Error finding container 9cefd78ff09187574c480af467c31246381399035caa9bfbccaa5fbfb0bb669f: Status 404 returned error can't find the container with id 9cefd78ff09187574c480af467c31246381399035caa9bfbccaa5fbfb0bb669f Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.803588 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg"] Jan 23 06:37:28 crc kubenswrapper[4784]: W0123 06:37:28.810364 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd86a7e_7738_4a67_9c19_d34a70dbc9fe.slice/crio-e0fc37d1242775f27e4f1a21c91dee57abc8e42b3438d787a5b10d2fcb218ece WatchSource:0}: Error finding container e0fc37d1242775f27e4f1a21c91dee57abc8e42b3438d787a5b10d2fcb218ece: Status 404 returned error can't find the container with id e0fc37d1242775f27e4f1a21c91dee57abc8e42b3438d787a5b10d2fcb218ece Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.812495 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt"] Jan 23 06:37:28 crc kubenswrapper[4784]: E0123 06:37:28.814021 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r2pbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-7znp2_openstack-operators(1cd86a7e-7738-4a67-9c19-d34a70dbc9fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 06:37:28 crc kubenswrapper[4784]: E0123 06:37:28.815289 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.818910 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2"] Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.897004 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" event={"ID":"e8de3214-d1e9-4800-9ace-51a85b326df8","Type":"ContainerStarted","Data":"b6afa5548d83cb1d26234021550fed2db36226267b2cdfe0350ff517380a1224"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.898829 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" event={"ID":"3a006f0b-6298-4509-9533-178b38906875","Type":"ContainerStarted","Data":"f2a3b196883a2b6d82a836ea7588fa758ea3a9d3e42e30c15bf2837edc58141f"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.901237 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" event={"ID":"7c5e978b-ac3c-439e-b2b1-ab025c130984","Type":"ContainerStarted","Data":"1087d5b80ff4e49eea2e368467f51a8d2f7d21a136eebd8df4ad1fb7613ffed5"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.902898 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" event={"ID":"9bc11b97-7610-4c0f-898a-bb42b42c37d7","Type":"ContainerStarted","Data":"bef9c7fc5da7f536fb3cb4165d9141d9eb2a88ac132836d5bb7e88f2eb1d6154"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.904510 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" event={"ID":"1cd86a7e-7738-4a67-9c19-d34a70dbc9fe","Type":"ContainerStarted","Data":"e0fc37d1242775f27e4f1a21c91dee57abc8e42b3438d787a5b10d2fcb218ece"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.906731 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" event={"ID":"89a376c8-b238-445d-99da-b85f3c421125","Type":"ContainerStarted","Data":"7d653885c0c06bfc36606ee913f6f4401cf635ce3ecf0f1accfa22ce3787d21a"} Jan 23 06:37:28 crc kubenswrapper[4784]: E0123 06:37:28.907303 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.910230 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" event={"ID":"3b13bce8-a43d-4833-9472-81f048a95be3","Type":"ContainerStarted","Data":"413e80643541d79d8a74b534dfb3e81d02c0286e5509e9edcaf3201e5631add2"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.911825 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" event={"ID":"be79eaa0-8040-4009-9f16-fcb56bffbff7","Type":"ContainerStarted","Data":"a106bd453e31f5edd0d734afa63d2c11fda15b1df355bf5eb34d0be820b83176"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.913290 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" event={"ID":"2c2a2d81-11ef-4146-ad50-8f7f39163253","Type":"ContainerStarted","Data":"2e4e7bbde49320ff93c660a31af06f5bcadafca6f7447db650c7e996873be190"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.915828 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" event={"ID":"80f7466e-7d6a-4416-9259-c30d69ee725e","Type":"ContainerStarted","Data":"5f9691eccfea08d62954172ecf796fa32660518e53751c2959ba68bfe7f177e5"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.919161 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" event={"ID":"138e85ae-26a7-45f3-ac25-61ece9cf8573","Type":"ContainerStarted","Data":"0768f7e61eebde6a3eff69b209e3c2f22450444cf2b7de8cf2a27e265f4992ad"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.921193 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" event={"ID":"d6c01b10-21b9-4e8b-b051-6f148f468828","Type":"ContainerStarted","Data":"ddf4d223e7d474d5455232be5d92a70a8b1d1691b18097500b3e3cb5fab90005"} Jan 23 06:37:28 crc kubenswrapper[4784]: I0123 06:37:28.923083 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" event={"ID":"2e269fdb-0502-4d62-9a0d-15094fdd942c","Type":"ContainerStarted","Data":"9cefd78ff09187574c480af467c31246381399035caa9bfbccaa5fbfb0bb669f"} Jan 23 06:37:29 crc kubenswrapper[4784]: I0123 06:37:29.281179 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:37:29 crc kubenswrapper[4784]: E0123 06:37:29.281632 4784 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:37:29 crc kubenswrapper[4784]: E0123 06:37:29.281794 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert podName:500659da-123f-4500-9c50-2b7b3b7656df nodeName:}" failed. No retries permitted until 2026-01-23 06:37:33.281741786 +0000 UTC m=+1056.514249760 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" (UID: "500659da-123f-4500-9c50-2b7b3b7656df") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:37:29 crc kubenswrapper[4784]: I0123 06:37:29.791340 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:29 crc kubenswrapper[4784]: I0123 06:37:29.791454 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:29 crc kubenswrapper[4784]: E0123 06:37:29.791585 4784 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:37:29 crc kubenswrapper[4784]: E0123 06:37:29.791608 4784 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:37:29 crc kubenswrapper[4784]: E0123 06:37:29.791668 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:33.791647166 +0000 UTC m=+1057.024155140 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "metrics-server-cert" not found Jan 23 06:37:29 crc kubenswrapper[4784]: E0123 06:37:29.791685 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:33.791679297 +0000 UTC m=+1057.024187271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "webhook-server-cert" not found Jan 23 06:37:29 crc kubenswrapper[4784]: E0123 06:37:29.936620 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" Jan 23 06:37:32 crc kubenswrapper[4784]: I0123 06:37:32.748842 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:32 crc kubenswrapper[4784]: E0123 06:37:32.749526 4784 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 06:37:32 crc kubenswrapper[4784]: E0123 06:37:32.749607 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert podName:758913f1-9ef1-4fe9-9d5f-2cb794fcddef nodeName:}" failed. No retries permitted until 2026-01-23 06:37:40.749580029 +0000 UTC m=+1063.982088003 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert") pod "infra-operator-controller-manager-58749ffdfb-hl8gk" (UID: "758913f1-9ef1-4fe9-9d5f-2cb794fcddef") : secret "infra-operator-webhook-server-cert" not found Jan 23 06:37:33 crc kubenswrapper[4784]: I0123 06:37:33.401740 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:37:33 crc kubenswrapper[4784]: E0123 06:37:33.402439 4784 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:37:33 crc kubenswrapper[4784]: E0123 06:37:33.402539 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert podName:500659da-123f-4500-9c50-2b7b3b7656df nodeName:}" failed. No retries permitted until 2026-01-23 06:37:41.402519664 +0000 UTC m=+1064.635027638 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" (UID: "500659da-123f-4500-9c50-2b7b3b7656df") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:37:33 crc kubenswrapper[4784]: I0123 06:37:33.861813 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:33 crc kubenswrapper[4784]: I0123 06:37:33.861904 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:33 crc kubenswrapper[4784]: E0123 06:37:33.862078 4784 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:37:33 crc kubenswrapper[4784]: E0123 06:37:33.862143 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:41.862122427 +0000 UTC m=+1065.094630401 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "webhook-server-cert" not found Jan 23 06:37:33 crc kubenswrapper[4784]: E0123 06:37:33.862549 4784 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:37:33 crc kubenswrapper[4784]: E0123 06:37:33.862576 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:41.862568808 +0000 UTC m=+1065.095076782 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "metrics-server-cert" not found Jan 23 06:37:40 crc kubenswrapper[4784]: I0123 06:37:40.758096 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:40 crc kubenswrapper[4784]: I0123 06:37:40.773406 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/758913f1-9ef1-4fe9-9d5f-2cb794fcddef-cert\") pod \"infra-operator-controller-manager-58749ffdfb-hl8gk\" (UID: \"758913f1-9ef1-4fe9-9d5f-2cb794fcddef\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:40 crc 
kubenswrapper[4784]: I0123 06:37:40.917297 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:37:41 crc kubenswrapper[4784]: I0123 06:37:41.471726 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:37:41 crc kubenswrapper[4784]: I0123 06:37:41.498214 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/500659da-123f-4500-9c50-2b7b3b7656df-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6\" (UID: \"500659da-123f-4500-9c50-2b7b3b7656df\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:37:41 crc kubenswrapper[4784]: I0123 06:37:41.593494 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:37:41 crc kubenswrapper[4784]: I0123 06:37:41.883476 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:41 crc kubenswrapper[4784]: I0123 06:37:41.883596 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:41 crc kubenswrapper[4784]: E0123 06:37:41.883631 4784 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:37:41 crc kubenswrapper[4784]: E0123 06:37:41.883716 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs podName:409eb30c-947e-4d15-9b7c-8a73ba35ad70 nodeName:}" failed. No retries permitted until 2026-01-23 06:37:57.883689718 +0000 UTC m=+1081.116197692 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs") pod "openstack-operator-controller-manager-7cccd889d5-jxhkn" (UID: "409eb30c-947e-4d15-9b7c-8a73ba35ad70") : secret "webhook-server-cert" not found Jan 23 06:37:41 crc kubenswrapper[4784]: I0123 06:37:41.896274 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-metrics-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:41 crc kubenswrapper[4784]: E0123 06:37:41.965453 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 23 06:37:41 crc kubenswrapper[4784]: E0123 06:37:41.965710 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h7ns7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-q7sn8_openstack-operators(0e01c35c-c9bd-4b02-adb1-be49a504ea54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:41 crc kubenswrapper[4784]: E0123 06:37:41.967927 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" Jan 23 06:37:42 crc kubenswrapper[4784]: E0123 06:37:42.211167 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" Jan 23 06:37:44 crc kubenswrapper[4784]: E0123 06:37:44.982355 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 23 06:37:44 crc kubenswrapper[4784]: E0123 06:37:44.982849 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2f5dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-hcqtn_openstack-operators(417f228a-38b7-448a-980d-f64d6e113646): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:44 crc kubenswrapper[4784]: E0123 06:37:44.984029 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" Jan 23 06:37:45 crc kubenswrapper[4784]: E0123 06:37:45.256726 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" Jan 23 06:37:46 crc kubenswrapper[4784]: E0123 06:37:46.173967 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 23 06:37:46 crc kubenswrapper[4784]: E0123 06:37:46.174613 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8zmpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-c2btv_openstack-operators(2c2a2d81-11ef-4146-ad50-8f7f39163253): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:46 crc kubenswrapper[4784]: E0123 06:37:46.175898 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" Jan 23 06:37:46 crc kubenswrapper[4784]: E0123 06:37:46.263322 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" Jan 23 06:37:47 crc kubenswrapper[4784]: E0123 06:37:47.232972 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f" Jan 23 06:37:47 crc kubenswrapper[4784]: E0123 06:37:47.233283 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhsws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-69cf5d4557-kl6d5_openstack-operators(7c5e978b-ac3c-439e-b2b1-ab025c130984): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:47 crc kubenswrapper[4784]: E0123 06:37:47.236016 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" Jan 23 06:37:47 crc kubenswrapper[4784]: E0123 06:37:47.271912 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" Jan 23 06:37:49 crc kubenswrapper[4784]: E0123 06:37:49.230031 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 23 06:37:49 crc kubenswrapper[4784]: E0123 06:37:49.230262 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6t5hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-c2zh7_openstack-operators(3a006f0b-6298-4509-9533-178b38906875): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:49 crc kubenswrapper[4784]: E0123 06:37:49.231439 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" Jan 23 06:37:49 crc kubenswrapper[4784]: E0123 06:37:49.286006 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" Jan 23 06:37:50 crc kubenswrapper[4784]: E0123 06:37:50.802334 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 23 06:37:50 crc kubenswrapper[4784]: E0123 06:37:50.803166 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pngl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-2vptn_openstack-operators(89f228f9-5c69-4e48-bf35-01cc25b56ecd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:50 crc kubenswrapper[4784]: E0123 06:37:50.804461 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" Jan 23 06:37:51 crc kubenswrapper[4784]: E0123 06:37:51.305116 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" Jan 23 06:37:52 crc kubenswrapper[4784]: E0123 06:37:52.049529 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 23 06:37:52 crc kubenswrapper[4784]: E0123 06:37:52.049895 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dcqjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-nb6tb_openstack-operators(f54aca80-78ad-4bda-905c-0a519a4f33ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:52 crc kubenswrapper[4784]: E0123 06:37:52.051159 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" Jan 23 06:37:52 crc kubenswrapper[4784]: E0123 06:37:52.314899 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" Jan 23 06:37:53 crc kubenswrapper[4784]: E0123 06:37:53.365990 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 23 06:37:53 crc kubenswrapper[4784]: E0123 06:37:53.367387 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jkqrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-82hzn_openstack-operators(9bc11b97-7610-4c0f-898a-bb42b42c37d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:53 crc kubenswrapper[4784]: E0123 06:37:53.368685 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" Jan 23 06:37:54 crc kubenswrapper[4784]: E0123 06:37:54.063212 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 23 06:37:54 crc kubenswrapper[4784]: E0123 06:37:54.063721 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqp78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-zkswk_openstack-operators(55f3492a-a5c0-460b-a93b-eb680b426a7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:54 crc kubenswrapper[4784]: E0123 06:37:54.065687 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" Jan 23 06:37:54 crc kubenswrapper[4784]: E0123 06:37:54.350078 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" Jan 23 06:37:54 crc kubenswrapper[4784]: E0123 06:37:54.351735 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" Jan 23 06:37:55 crc kubenswrapper[4784]: E0123 06:37:55.450162 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 23 06:37:55 crc kubenswrapper[4784]: E0123 06:37:55.450482 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5sz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-wzjzl_openstack-operators(138e85ae-26a7-45f3-ac25-61ece9cf8573): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:55 crc kubenswrapper[4784]: E0123 06:37:55.451631 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" Jan 23 06:37:56 crc kubenswrapper[4784]: E0123 06:37:56.272044 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 23 06:37:56 crc kubenswrapper[4784]: E0123 06:37:56.272609 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9m8tj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-94f28_openstack-operators(3b13bce8-a43d-4833-9472-81f048a95be3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:56 crc kubenswrapper[4784]: E0123 06:37:56.273866 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" Jan 23 06:37:56 crc kubenswrapper[4784]: E0123 06:37:56.367186 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" Jan 23 06:37:56 crc kubenswrapper[4784]: E0123 06:37:56.368332 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" Jan 23 06:37:57 crc kubenswrapper[4784]: E0123 06:37:57.003519 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 23 06:37:57 crc kubenswrapper[4784]: E0123 06:37:57.003811 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n7r79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7_openstack-operators(f809f5f2-7409-4d7e-b938-1efc34dc4c2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:57 crc kubenswrapper[4784]: E0123 06:37:57.005379 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" Jan 23 06:37:57 crc kubenswrapper[4784]: E0123 06:37:57.376621 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" Jan 23 06:37:57 crc kubenswrapper[4784]: I0123 06:37:57.912151 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:57 crc kubenswrapper[4784]: I0123 06:37:57.925332 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/409eb30c-947e-4d15-9b7c-8a73ba35ad70-webhook-certs\") pod \"openstack-operator-controller-manager-7cccd889d5-jxhkn\" (UID: \"409eb30c-947e-4d15-9b7c-8a73ba35ad70\") " pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:58 crc kubenswrapper[4784]: I0123 06:37:58.173227 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9kq6t" Jan 23 06:37:58 crc kubenswrapper[4784]: I0123 06:37:58.181783 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:37:59 crc kubenswrapper[4784]: E0123 06:37:59.221547 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 06:37:59 crc kubenswrapper[4784]: E0123 06:37:59.221848 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p827t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5dsxt_openstack-operators(80f7466e-7d6a-4416-9259-c30d69ee725e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:37:59 crc kubenswrapper[4784]: E0123 06:37:59.223084 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" podUID="80f7466e-7d6a-4416-9259-c30d69ee725e" Jan 23 06:37:59 crc kubenswrapper[4784]: E0123 06:37:59.392730 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" podUID="80f7466e-7d6a-4416-9259-c30d69ee725e" Jan 23 06:38:07 crc 
kubenswrapper[4784]: E0123 06:38:07.589488 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 23 06:38:07 crc kubenswrapper[4784]: E0123 06:38:07.590673 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkf55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-2wrsg_openstack-operators(e8de3214-d1e9-4800-9ace-51a85b326df8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:38:07 crc kubenswrapper[4784]: E0123 06:38:07.592500 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" Jan 23 06:38:09 crc kubenswrapper[4784]: E0123 06:38:09.374984 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" Jan 23 06:38:12 crc kubenswrapper[4784]: E0123 06:38:12.123867 4784 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 06:38:12 crc kubenswrapper[4784]: E0123 06:38:12.125030 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r2pbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-7znp2_openstack-operators(1cd86a7e-7738-4a67-9c19-d34a70dbc9fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:38:12 crc kubenswrapper[4784]: E0123 06:38:12.126410 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" Jan 23 06:38:12 crc kubenswrapper[4784]: I0123 06:38:12.969961 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6"] Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.046854 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn"] Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.076218 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk"] Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.582550 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" event={"ID":"d6c01b10-21b9-4e8b-b051-6f148f468828","Type":"ContainerStarted","Data":"51a246ecb1c8341ee417386a01de941615ba8b8871c70bd1e1ddedba0b5957e6"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.583783 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.584727 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" event={"ID":"7c5e978b-ac3c-439e-b2b1-ab025c130984","Type":"ContainerStarted","Data":"3a967834cd9047ed7272022dd4d7fd1a01c5ba480bc1135d15d7d2c34db154d3"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.585128 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.620737 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" event={"ID":"9bc11b97-7610-4c0f-898a-bb42b42c37d7","Type":"ContainerStarted","Data":"a30829bfcc203f2f5d8700e2a1835d1a049a72e098ca487ed3379131f7e283f8"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.621371 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.627662 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" 
event={"ID":"3b13bce8-a43d-4833-9472-81f048a95be3","Type":"ContainerStarted","Data":"47e967c4505e5f77dc00c441db0a43b7965e7277571396e99e67a50335b2212a"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.627920 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.636565 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" event={"ID":"f809f5f2-7409-4d7e-b938-1efc34dc4c2f","Type":"ContainerStarted","Data":"40921e91bec82063e3258fabd6962b4c9c841fff38d04a83665a3089915ee6ac"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.637027 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.637991 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" event={"ID":"f54aca80-78ad-4bda-905c-0a519a4f33ed","Type":"ContainerStarted","Data":"e6e8c4861c90543a850ebe93c67b64f50be474672f17a8e932a763a05aafc7fe"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.638347 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.639047 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" event={"ID":"758913f1-9ef1-4fe9-9d5f-2cb794fcddef","Type":"ContainerStarted","Data":"c55cce4478710c81edfb5393049cf9332f9f2ca151fa5bff4b56f3cdb3023dc9"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.640101 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" event={"ID":"409eb30c-947e-4d15-9b7c-8a73ba35ad70","Type":"ContainerStarted","Data":"d3e97731dcf5cc0c62013ae210e094fde1189f36a1101d04cd8dcc9387aae523"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.641062 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" event={"ID":"2e269fdb-0502-4d62-9a0d-15094fdd942c","Type":"ContainerStarted","Data":"e26ed587ce5e460c34b86e7bc1b7c483ac8dd85729342b00c27b2af5c30c783f"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.641486 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.642349 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" event={"ID":"55f3492a-a5c0-460b-a93b-eb680b426a7c","Type":"ContainerStarted","Data":"54b37f324336b7ac491e75c944c2d2a1b1aeae29cacb998104697892098a73b1"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.642706 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.643567 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" event={"ID":"be79eaa0-8040-4009-9f16-fcb56bffbff7","Type":"ContainerStarted","Data":"4fa2741719fe97418ce26b948bb1ade7fae773db04484e5c7a5da4e8d164b214"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.644024 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.644882 4784 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" event={"ID":"138e85ae-26a7-45f3-ac25-61ece9cf8573","Type":"ContainerStarted","Data":"fbbbdec92bb9a9df5c13215ed78f06ccb5173a53d50f37653faf4b0afd9c7104"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.645202 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.656507 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" event={"ID":"3a006f0b-6298-4509-9533-178b38906875","Type":"ContainerStarted","Data":"72b0b8f30d9bec96529424e5f4f5fa35573a8e83f1ef99cd635f8e254ad11030"} Jan 23 06:38:13 crc kubenswrapper[4784]: I0123 06:38:13.657233 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.881160 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" event={"ID":"89a376c8-b238-445d-99da-b85f3c421125","Type":"ContainerStarted","Data":"231a51d922328c63d8bc60339ff99a10fea2fe1633661ddd1c9a5790fba2865d"} Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.882616 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.892216 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" event={"ID":"2c2a2d81-11ef-4146-ad50-8f7f39163253","Type":"ContainerStarted","Data":"63476a1e46ed7d85893d95bd90150350f90fe52a45e93b52573e5f9647778b97"} Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 
06:38:13.893270 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.894432 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" event={"ID":"4fa12cd4-f2bc-4863-8b67-e246a0becee3","Type":"ContainerStarted","Data":"7139c56d52233ed27f622be20e5ff8eadeff94c5d1740b50a0eed343b6837d4b"} Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.894873 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.895592 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" event={"ID":"500659da-123f-4500-9c50-2b7b3b7656df","Type":"ContainerStarted","Data":"051250442bb30c83cec0fb7a0ec260b94abf16923031eed1bc7f8a8b20d4b229"} Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.901440 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" event={"ID":"89f228f9-5c69-4e48-bf35-01cc25b56ecd","Type":"ContainerStarted","Data":"4e60b62e73612f0b4b0f3e797e97f96da637371640a2e888d58c03fa29334e40"} Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.902166 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.903644 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" event={"ID":"0e01c35c-c9bd-4b02-adb1-be49a504ea54","Type":"ContainerStarted","Data":"c42fb7642df77acbb56e5bf38601b38c957c672b5960dbdaa7e280802952af9d"} Jan 
23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.903926 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.918861 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" event={"ID":"417f228a-38b7-448a-980d-f64d6e113646","Type":"ContainerStarted","Data":"82a2acd77c9163fe773d7977bc7ceca847ea4aa15204dcde00d5bd41f97bd9b2"} Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:13.919687 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.163637 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" podStartSLOduration=6.546097146 podStartE2EDuration="49.163606916s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.753832214 +0000 UTC m=+1051.986340188" lastFinishedPulling="2026-01-23 06:38:11.371341944 +0000 UTC m=+1094.603849958" observedRunningTime="2026-01-23 06:38:14.140203191 +0000 UTC m=+1097.372711165" watchObservedRunningTime="2026-01-23 06:38:14.163606916 +0000 UTC m=+1097.396114890" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.228470 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podStartSLOduration=5.05848199 podStartE2EDuration="50.228440909s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:27.069941737 +0000 UTC m=+1050.302449711" lastFinishedPulling="2026-01-23 06:38:12.239900616 +0000 UTC m=+1095.472408630" observedRunningTime="2026-01-23 06:38:14.224044831 +0000 UTC m=+1097.456552805" 
watchObservedRunningTime="2026-01-23 06:38:14.228440909 +0000 UTC m=+1097.460948883" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.280266 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podStartSLOduration=5.487757148 podStartE2EDuration="50.280245052s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:27.437522409 +0000 UTC m=+1050.670030383" lastFinishedPulling="2026-01-23 06:38:12.230010273 +0000 UTC m=+1095.462518287" observedRunningTime="2026-01-23 06:38:14.276121421 +0000 UTC m=+1097.508629395" watchObservedRunningTime="2026-01-23 06:38:14.280245052 +0000 UTC m=+1097.512753026" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.322307 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" podStartSLOduration=6.88333336 podStartE2EDuration="50.322283185s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:27.932519402 +0000 UTC m=+1051.165027376" lastFinishedPulling="2026-01-23 06:38:11.371469187 +0000 UTC m=+1094.603977201" observedRunningTime="2026-01-23 06:38:14.318732128 +0000 UTC m=+1097.551240102" watchObservedRunningTime="2026-01-23 06:38:14.322283185 +0000 UTC m=+1097.554791159" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.348784 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podStartSLOduration=6.586456507 podStartE2EDuration="50.348762176s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.752601974 +0000 UTC m=+1051.985109948" lastFinishedPulling="2026-01-23 06:38:12.514907603 +0000 UTC m=+1095.747415617" observedRunningTime="2026-01-23 06:38:14.346931071 +0000 UTC m=+1097.579439045" 
watchObservedRunningTime="2026-01-23 06:38:14.348762176 +0000 UTC m=+1097.581270150" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.414026 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podStartSLOduration=4.223714058 podStartE2EDuration="50.414001949s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:26.051892981 +0000 UTC m=+1049.284400955" lastFinishedPulling="2026-01-23 06:38:12.242180852 +0000 UTC m=+1095.474688846" observedRunningTime="2026-01-23 06:38:14.408709449 +0000 UTC m=+1097.641217423" watchObservedRunningTime="2026-01-23 06:38:14.414001949 +0000 UTC m=+1097.646509923" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.482562 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podStartSLOduration=5.457512668 podStartE2EDuration="49.482543044s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.290600261 +0000 UTC m=+1051.523108235" lastFinishedPulling="2026-01-23 06:38:12.315630617 +0000 UTC m=+1095.548138611" observedRunningTime="2026-01-23 06:38:14.480474972 +0000 UTC m=+1097.712982956" watchObservedRunningTime="2026-01-23 06:38:14.482543044 +0000 UTC m=+1097.715051018" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.513660 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podStartSLOduration=6.488322837 podStartE2EDuration="49.513632157s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.301850198 +0000 UTC m=+1051.534358162" lastFinishedPulling="2026-01-23 06:38:11.327159508 +0000 UTC m=+1094.559667482" observedRunningTime="2026-01-23 06:38:14.513030372 +0000 UTC m=+1097.745538346" 
watchObservedRunningTime="2026-01-23 06:38:14.513632157 +0000 UTC m=+1097.746140131" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.620834 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podStartSLOduration=7.073023235 podStartE2EDuration="49.620814641s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.807459592 +0000 UTC m=+1052.039967566" lastFinishedPulling="2026-01-23 06:38:11.355250968 +0000 UTC m=+1094.587758972" observedRunningTime="2026-01-23 06:38:14.614855274 +0000 UTC m=+1097.847363248" watchObservedRunningTime="2026-01-23 06:38:14.620814641 +0000 UTC m=+1097.853322625" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.874494 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podStartSLOduration=6.359341028 podStartE2EDuration="49.874470424s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.714947759 +0000 UTC m=+1051.947455733" lastFinishedPulling="2026-01-23 06:38:12.230077135 +0000 UTC m=+1095.462585129" observedRunningTime="2026-01-23 06:38:14.823263356 +0000 UTC m=+1098.055771330" watchObservedRunningTime="2026-01-23 06:38:14.874470424 +0000 UTC m=+1098.106978398" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.908133 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podStartSLOduration=6.030513426 podStartE2EDuration="50.908106061s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:27.438035772 +0000 UTC m=+1050.670543746" lastFinishedPulling="2026-01-23 06:38:12.315628367 +0000 UTC m=+1095.548136381" observedRunningTime="2026-01-23 06:38:14.905344803 +0000 UTC m=+1098.137852767" 
watchObservedRunningTime="2026-01-23 06:38:14.908106061 +0000 UTC m=+1098.140614025" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.911375 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podStartSLOduration=7.428941699 podStartE2EDuration="50.911368651s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.752568853 +0000 UTC m=+1051.985076827" lastFinishedPulling="2026-01-23 06:38:12.234995545 +0000 UTC m=+1095.467503779" observedRunningTime="2026-01-23 06:38:14.876964585 +0000 UTC m=+1098.109472549" watchObservedRunningTime="2026-01-23 06:38:14.911368651 +0000 UTC m=+1098.143876625" Jan 23 06:38:14 crc kubenswrapper[4784]: I0123 06:38:14.948587 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podStartSLOduration=6.380707523 podStartE2EDuration="49.948560284s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.665485613 +0000 UTC m=+1051.897993587" lastFinishedPulling="2026-01-23 06:38:12.233338344 +0000 UTC m=+1095.465846348" observedRunningTime="2026-01-23 06:38:14.9467591 +0000 UTC m=+1098.179267074" watchObservedRunningTime="2026-01-23 06:38:14.948560284 +0000 UTC m=+1098.181068258" Jan 23 06:38:15 crc kubenswrapper[4784]: I0123 06:38:15.062242 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" event={"ID":"409eb30c-947e-4d15-9b7c-8a73ba35ad70","Type":"ContainerStarted","Data":"0e285391e7b5121c93ace6e1df4cade3a5671af276b059c3be23471db1ac82d1"} Jan 23 06:38:15 crc kubenswrapper[4784]: I0123 06:38:15.062313 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:38:15 crc 
kubenswrapper[4784]: I0123 06:38:15.163694 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" podStartSLOduration=7.339431089 podStartE2EDuration="51.16366727s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:27.53073548 +0000 UTC m=+1050.763243454" lastFinishedPulling="2026-01-23 06:38:11.354971661 +0000 UTC m=+1094.587479635" observedRunningTime="2026-01-23 06:38:15.068118003 +0000 UTC m=+1098.300625977" watchObservedRunningTime="2026-01-23 06:38:15.16366727 +0000 UTC m=+1098.396175244" Jan 23 06:38:15 crc kubenswrapper[4784]: I0123 06:38:15.164076 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podStartSLOduration=6.430099955 podStartE2EDuration="51.16406971s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:27.540167792 +0000 UTC m=+1050.772675766" lastFinishedPulling="2026-01-23 06:38:12.274137537 +0000 UTC m=+1095.506645521" observedRunningTime="2026-01-23 06:38:15.161904247 +0000 UTC m=+1098.394412211" watchObservedRunningTime="2026-01-23 06:38:15.16406971 +0000 UTC m=+1098.396577684" Jan 23 06:38:15 crc kubenswrapper[4784]: I0123 06:38:15.349925 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podStartSLOduration=6.053547941 podStartE2EDuration="51.349896886s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:27.047380242 +0000 UTC m=+1050.279888216" lastFinishedPulling="2026-01-23 06:38:12.343729177 +0000 UTC m=+1095.576237161" observedRunningTime="2026-01-23 06:38:15.32320909 +0000 UTC m=+1098.555717064" watchObservedRunningTime="2026-01-23 06:38:15.349896886 +0000 UTC m=+1098.582404860" Jan 23 06:38:15 crc kubenswrapper[4784]: 
I0123 06:38:15.423396 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podStartSLOduration=6.773696451 podStartE2EDuration="50.423363322s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.698051694 +0000 UTC m=+1051.930559668" lastFinishedPulling="2026-01-23 06:38:12.347718555 +0000 UTC m=+1095.580226539" observedRunningTime="2026-01-23 06:38:15.418558253 +0000 UTC m=+1098.651066227" watchObservedRunningTime="2026-01-23 06:38:15.423363322 +0000 UTC m=+1098.655871296" Jan 23 06:38:15 crc kubenswrapper[4784]: I0123 06:38:15.491460 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podStartSLOduration=50.491436254 podStartE2EDuration="50.491436254s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:38:15.458596128 +0000 UTC m=+1098.691104092" watchObservedRunningTime="2026-01-23 06:38:15.491436254 +0000 UTC m=+1098.723944238" Jan 23 06:38:16 crc kubenswrapper[4784]: I0123 06:38:16.072618 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" event={"ID":"80f7466e-7d6a-4416-9259-c30d69ee725e","Type":"ContainerStarted","Data":"bf524284fcbb2f0d69563d484d7060081a17dfdda94c642ca7132ebd30698d21"} Jan 23 06:38:17 crc kubenswrapper[4784]: I0123 06:38:17.295040 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" podStartSLOduration=6.230094631 podStartE2EDuration="52.295021412s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.793051808 +0000 UTC m=+1052.025559782" 
lastFinishedPulling="2026-01-23 06:38:14.857978589 +0000 UTC m=+1098.090486563" observedRunningTime="2026-01-23 06:38:16.106146909 +0000 UTC m=+1099.338654883" watchObservedRunningTime="2026-01-23 06:38:17.295021412 +0000 UTC m=+1100.527529386" Jan 23 06:38:18 crc kubenswrapper[4784]: I0123 06:38:18.200058 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 06:38:19 crc kubenswrapper[4784]: I0123 06:38:19.129128 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" event={"ID":"500659da-123f-4500-9c50-2b7b3b7656df","Type":"ContainerStarted","Data":"8c6e2f30c1086bdc04717bd26328e8f82831d5959fc07d4565ef0b555f515224"} Jan 23 06:38:19 crc kubenswrapper[4784]: I0123 06:38:19.131003 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:38:19 crc kubenswrapper[4784]: I0123 06:38:19.132757 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" event={"ID":"758913f1-9ef1-4fe9-9d5f-2cb794fcddef","Type":"ContainerStarted","Data":"6bc63c5a95df5062eef621931ad51b9f861a1cdf4cc0a8573bcd603f12ba4f88"} Jan 23 06:38:19 crc kubenswrapper[4784]: I0123 06:38:19.133952 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:38:19 crc kubenswrapper[4784]: I0123 06:38:19.164331 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" podStartSLOduration=48.619539176 podStartE2EDuration="54.164305385s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:38:13.042309363 
+0000 UTC m=+1096.274817327" lastFinishedPulling="2026-01-23 06:38:18.587075562 +0000 UTC m=+1101.819583536" observedRunningTime="2026-01-23 06:38:19.162427119 +0000 UTC m=+1102.394935113" watchObservedRunningTime="2026-01-23 06:38:19.164305385 +0000 UTC m=+1102.396813359" Jan 23 06:38:19 crc kubenswrapper[4784]: I0123 06:38:19.200980 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" podStartSLOduration=49.758609854 podStartE2EDuration="55.200924646s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:38:13.127287241 +0000 UTC m=+1096.359795215" lastFinishedPulling="2026-01-23 06:38:18.569602033 +0000 UTC m=+1101.802110007" observedRunningTime="2026-01-23 06:38:19.193868193 +0000 UTC m=+1102.426376177" watchObservedRunningTime="2026-01-23 06:38:19.200924646 +0000 UTC m=+1102.433432650" Jan 23 06:38:23 crc kubenswrapper[4784]: I0123 06:38:23.169296 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" event={"ID":"e8de3214-d1e9-4800-9ace-51a85b326df8","Type":"ContainerStarted","Data":"f838fee77ce3b52571a7392e8983db4d7e7fccab5228acbdaf463e00dddec160"} Jan 23 06:38:23 crc kubenswrapper[4784]: I0123 06:38:23.170534 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" Jan 23 06:38:23 crc kubenswrapper[4784]: I0123 06:38:23.194291 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podStartSLOduration=4.242911374 podStartE2EDuration="58.194269135s" podCreationTimestamp="2026-01-23 06:37:25 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.793462938 +0000 UTC m=+1052.025970912" lastFinishedPulling="2026-01-23 06:38:22.744820659 +0000 UTC m=+1105.977328673" 
observedRunningTime="2026-01-23 06:38:23.190637245 +0000 UTC m=+1106.423145219" watchObservedRunningTime="2026-01-23 06:38:23.194269135 +0000 UTC m=+1106.426777119" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.137651 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.193372 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.201901 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.227260 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.273542 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.294362 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.355229 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.423724 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.488510 4784 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.608088 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.662598 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.682608 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.763202 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.833528 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" Jan 23 06:38:25 crc kubenswrapper[4784]: I0123 06:38:25.949587 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" Jan 23 06:38:26 crc kubenswrapper[4784]: I0123 06:38:26.058280 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" Jan 23 06:38:26 crc kubenswrapper[4784]: I0123 06:38:26.180891 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" Jan 23 06:38:27 crc kubenswrapper[4784]: E0123 06:38:27.264852 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" Jan 23 06:38:30 crc kubenswrapper[4784]: I0123 06:38:30.928057 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 06:38:31 crc kubenswrapper[4784]: I0123 06:38:31.604579 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 06:38:35 crc kubenswrapper[4784]: I0123 06:38:35.722477 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" Jan 23 06:38:43 crc kubenswrapper[4784]: I0123 06:38:43.384460 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" event={"ID":"1cd86a7e-7738-4a67-9c19-d34a70dbc9fe","Type":"ContainerStarted","Data":"010578990e3d232d1556d179c8a0b01827db5788554a75996915119465720917"} Jan 23 06:38:43 crc kubenswrapper[4784]: I0123 06:38:43.385577 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" Jan 23 06:38:43 crc kubenswrapper[4784]: I0123 06:38:43.407221 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podStartSLOduration=5.9910760320000005 podStartE2EDuration="1m19.407191503s" podCreationTimestamp="2026-01-23 06:37:24 +0000 UTC" firstStartedPulling="2026-01-23 06:37:28.813867059 +0000 UTC m=+1052.046375033" lastFinishedPulling="2026-01-23 
06:38:42.2299825 +0000 UTC m=+1125.462490504" observedRunningTime="2026-01-23 06:38:43.406719731 +0000 UTC m=+1126.639227705" watchObservedRunningTime="2026-01-23 06:38:43.407191503 +0000 UTC m=+1126.639699517" Jan 23 06:38:53 crc kubenswrapper[4784]: I0123 06:38:53.603894 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:38:53 crc kubenswrapper[4784]: I0123 06:38:53.604930 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:38:55 crc kubenswrapper[4784]: I0123 06:38:55.564955 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.347923 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k2mm9"] Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.349670 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:14 crc kubenswrapper[4784]: W0123 06:39:14.358135 4784 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 23 06:39:14 crc kubenswrapper[4784]: E0123 06:39:14.358214 4784 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 06:39:14 crc kubenswrapper[4784]: W0123 06:39:14.358285 4784 reflector.go:561] object-"openstack"/"dnsmasq-dns-dockercfg-4v9xx": failed to list *v1.Secret: secrets "dnsmasq-dns-dockercfg-4v9xx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 23 06:39:14 crc kubenswrapper[4784]: E0123 06:39:14.358300 4784 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dnsmasq-dns-dockercfg-4v9xx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"dnsmasq-dns-dockercfg-4v9xx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 06:39:14 crc kubenswrapper[4784]: W0123 06:39:14.358301 4784 reflector.go:561] object-"openstack"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource 
"configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 23 06:39:14 crc kubenswrapper[4784]: W0123 06:39:14.358356 4784 reflector.go:561] object-"openstack"/"dns": failed to list *v1.ConfigMap: configmaps "dns" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 23 06:39:14 crc kubenswrapper[4784]: E0123 06:39:14.358370 4784 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"dns\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 06:39:14 crc kubenswrapper[4784]: E0123 06:39:14.358361 4784 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.386384 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k2mm9"] Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.427597 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config\") pod \"dnsmasq-dns-675f4bcbfc-k2mm9\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.427815 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqbn7\" (UniqueName: \"kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7\") pod \"dnsmasq-dns-675f4bcbfc-k2mm9\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.529955 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config\") pod \"dnsmasq-dns-675f4bcbfc-k2mm9\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.530062 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqbn7\" (UniqueName: \"kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7\") pod \"dnsmasq-dns-675f4bcbfc-k2mm9\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.705605 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fbq7r"] Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.708706 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.727840 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.733343 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjzld\" (UniqueName: \"kubernetes.io/projected/5c97570e-8426-4d44-af59-a556532589c6-kube-api-access-pjzld\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.733660 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.733817 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.753182 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fbq7r"] Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.848822 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:14 crc 
kubenswrapper[4784]: I0123 06:39:14.848917 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjzld\" (UniqueName: \"kubernetes.io/projected/5c97570e-8426-4d44-af59-a556532589c6-kube-api-access-pjzld\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.848985 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:14 crc kubenswrapper[4784]: I0123 06:39:14.867866 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:15 crc kubenswrapper[4784]: I0123 06:39:15.214927 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 06:39:15 crc kubenswrapper[4784]: E0123 06:39:15.530858 4784 configmap.go:193] Couldn't get configMap openstack/dns: failed to sync configmap cache: timed out waiting for the condition Jan 23 06:39:15 crc kubenswrapper[4784]: E0123 06:39:15.530958 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config podName:0df1bbce-7b12-4893-b602-871a9de74fab nodeName:}" failed. No retries permitted until 2026-01-23 06:39:16.030938313 +0000 UTC m=+1159.263446287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config") pod "dnsmasq-dns-675f4bcbfc-k2mm9" (UID: "0df1bbce-7b12-4893-b602-871a9de74fab") : failed to sync configmap cache: timed out waiting for the condition Jan 23 06:39:15 crc kubenswrapper[4784]: I0123 06:39:15.646687 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4v9xx" Jan 23 06:39:15 crc kubenswrapper[4784]: E0123 06:39:15.674407 4784 projected.go:288] Couldn't get configMap openstack/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 06:39:15 crc kubenswrapper[4784]: E0123 06:39:15.674480 4784 projected.go:194] Error preparing data for projected volume kube-api-access-pqbn7 for pod openstack/dnsmasq-dns-675f4bcbfc-k2mm9: failed to sync configmap cache: timed out waiting for the condition Jan 23 06:39:15 crc kubenswrapper[4784]: E0123 06:39:15.674560 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7 podName:0df1bbce-7b12-4893-b602-871a9de74fab nodeName:}" failed. No retries permitted until 2026-01-23 06:39:16.174533193 +0000 UTC m=+1159.407041167 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pqbn7" (UniqueName: "kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7") pod "dnsmasq-dns-675f4bcbfc-k2mm9" (UID: "0df1bbce-7b12-4893-b602-871a9de74fab") : failed to sync configmap cache: timed out waiting for the condition Jan 23 06:39:15 crc kubenswrapper[4784]: I0123 06:39:15.816258 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 06:39:15 crc kubenswrapper[4784]: I0123 06:39:15.830936 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjzld\" (UniqueName: \"kubernetes.io/projected/5c97570e-8426-4d44-af59-a556532589c6-kube-api-access-pjzld\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:15 crc kubenswrapper[4784]: E0123 06:39:15.849869 4784 configmap.go:193] Couldn't get configMap openstack/dns: failed to sync configmap cache: timed out waiting for the condition Jan 23 06:39:15 crc kubenswrapper[4784]: E0123 06:39:15.849983 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config podName:5c97570e-8426-4d44-af59-a556532589c6 nodeName:}" failed. No retries permitted until 2026-01-23 06:39:16.349958074 +0000 UTC m=+1159.582466048 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config") pod "dnsmasq-dns-78dd6ddcc-fbq7r" (UID: "5c97570e-8426-4d44-af59-a556532589c6") : failed to sync configmap cache: timed out waiting for the condition Jan 23 06:39:15 crc kubenswrapper[4784]: I0123 06:39:15.863991 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.118347 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config\") pod \"dnsmasq-dns-675f4bcbfc-k2mm9\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.119435 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config\") pod \"dnsmasq-dns-675f4bcbfc-k2mm9\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.221222 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqbn7\" (UniqueName: \"kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7\") pod \"dnsmasq-dns-675f4bcbfc-k2mm9\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.228555 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqbn7\" (UniqueName: \"kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7\") pod \"dnsmasq-dns-675f4bcbfc-k2mm9\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:16 crc 
kubenswrapper[4784]: I0123 06:39:16.425072 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.426184 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config\") pod \"dnsmasq-dns-78dd6ddcc-fbq7r\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.476344 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.534412 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.696256 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k2mm9"] Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.756236 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-55djd"] Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.758921 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.842468 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-config\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.843026 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh6h\" (UniqueName: \"kubernetes.io/projected/0f18bc06-83da-4643-b856-3b0d700a1af8-kube-api-access-vmh6h\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.843066 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.868830 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-55djd"] Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.955960 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-config\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.956011 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh6h\" (UniqueName: 
\"kubernetes.io/projected/0f18bc06-83da-4643-b856-3b0d700a1af8-kube-api-access-vmh6h\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.956043 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.956917 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:16 crc kubenswrapper[4784]: I0123 06:39:16.957409 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-config\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.034831 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh6h\" (UniqueName: \"kubernetes.io/projected/0f18bc06-83da-4643-b856-3b0d700a1af8-kube-api-access-vmh6h\") pod \"dnsmasq-dns-666b6646f7-55djd\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.169691 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.300982 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fbq7r"] Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.343802 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jnd56"] Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.345338 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.357066 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jnd56"] Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.467929 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-config\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.468178 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.468312 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286x6\" (UniqueName: \"kubernetes.io/projected/a7aff6ba-048f-4e88-b924-58072427ab1e-kube-api-access-286x6\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 
06:39:17.501282 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fbq7r"] Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.570484 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-config\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.570574 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.570616 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286x6\" (UniqueName: \"kubernetes.io/projected/a7aff6ba-048f-4e88-b924-58072427ab1e-kube-api-access-286x6\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.575460 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-config\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.577272 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" 
Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.594309 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k2mm9"] Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.611397 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286x6\" (UniqueName: \"kubernetes.io/projected/a7aff6ba-048f-4e88-b924-58072427ab1e-kube-api-access-286x6\") pod \"dnsmasq-dns-57d769cc4f-jnd56\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.631232 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-55djd"] Jan 23 06:39:17 crc kubenswrapper[4784]: W0123 06:39:17.635787 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f18bc06_83da_4643_b856_3b0d700a1af8.slice/crio-4e9b21457aa545efa450ddb2ff28208ae40632a92ce561a34dbff731d47eb72e WatchSource:0}: Error finding container 4e9b21457aa545efa450ddb2ff28208ae40632a92ce561a34dbff731d47eb72e: Status 404 returned error can't find the container with id 4e9b21457aa545efa450ddb2ff28208ae40632a92ce561a34dbff731d47eb72e Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.674388 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.720461 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-55djd" event={"ID":"0f18bc06-83da-4643-b856-3b0d700a1af8","Type":"ContainerStarted","Data":"4e9b21457aa545efa450ddb2ff28208ae40632a92ce561a34dbff731d47eb72e"} Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.722372 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" event={"ID":"0df1bbce-7b12-4893-b602-871a9de74fab","Type":"ContainerStarted","Data":"17cb18848749919d8cc6c83a6c18a34c164eebcc2dcdf4d2f85831859c748cf9"} Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.723482 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" event={"ID":"5c97570e-8426-4d44-af59-a556532589c6","Type":"ContainerStarted","Data":"7bf28ea31400aa980262b120f625ad46b6354fa0bb418794438b67237236ff93"} Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.980801 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.982956 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.990780 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.991166 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.991359 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.991442 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.991354 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wnnn7" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.991864 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.992218 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 06:39:17 crc kubenswrapper[4784]: I0123 06:39:17.993477 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086392 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086477 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086506 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e79eab6-cf02-4c69-99bd-2f3512c809f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086538 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086564 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086590 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e79eab6-cf02-4c69-99bd-2f3512c809f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086617 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086663 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086693 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086730 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.086771 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqgh\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-kube-api-access-hkqgh\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191674 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191766 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191789 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkqgh\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-kube-api-access-hkqgh\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191845 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191889 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191909 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e79eab6-cf02-4c69-99bd-2f3512c809f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191939 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191959 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.191985 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e79eab6-cf02-4c69-99bd-2f3512c809f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.192006 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.192057 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.192529 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.192824 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.193005 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.194601 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.201679 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.203023 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " 
pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.204460 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e79eab6-cf02-4c69-99bd-2f3512c809f3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.225114 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e79eab6-cf02-4c69-99bd-2f3512c809f3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.233786 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.239295 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.239832 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.240657 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkqgh\" (UniqueName: 
\"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-kube-api-access-hkqgh\") pod \"rabbitmq-server-0\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.284649 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jnd56"] Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.333281 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.497779 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.499044 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.501201 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.501872 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.502941 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.503102 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.503375 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.503491 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.503610 4784 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fpvh8" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.522977 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598142 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e37da8a-e964-4f8b-aacc-2937130e2e7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598224 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e37da8a-e964-4f8b-aacc-2937130e2e7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598297 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598326 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598364 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598412 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598431 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598449 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598473 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598494 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.598513 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjsld\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-kube-api-access-jjsld\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700211 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700271 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700296 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700327 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700353 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700374 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjsld\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-kube-api-access-jjsld\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700395 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e37da8a-e964-4f8b-aacc-2937130e2e7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700428 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e37da8a-e964-4f8b-aacc-2937130e2e7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700468 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700487 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700505 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.700786 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.701644 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.702107 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 
06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.702244 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.703411 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.703995 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.705834 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e37da8a-e964-4f8b-aacc-2937130e2e7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.706798 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e37da8a-e964-4f8b-aacc-2937130e2e7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.708023 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.710017 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.719962 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjsld\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-kube-api-access-jjsld\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.734285 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" event={"ID":"a7aff6ba-048f-4e88-b924-58072427ab1e","Type":"ContainerStarted","Data":"87455a13ae19a7c78b46b1d4c39c68c8e3e9202d6b1734a07ed81af6bf233319"} Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.747590 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:18 crc kubenswrapper[4784]: I0123 06:39:18.842804 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.098378 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.210917 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.212870 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.219698 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.226668 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.251679 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.253330 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-wngmb" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.256269 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.257362 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.377434 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/85680fc8-18ee-4984-8bdb-a489d1e71d39-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.377500 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94b6\" (UniqueName: \"kubernetes.io/projected/85680fc8-18ee-4984-8bdb-a489d1e71d39-kube-api-access-j94b6\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.377559 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/85680fc8-18ee-4984-8bdb-a489d1e71d39-config-data-generated\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.377600 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.377629 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-config-data-default\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.377727 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85680fc8-18ee-4984-8bdb-a489d1e71d39-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.378264 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-kolla-config\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.378301 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-operator-scripts\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.491717 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-config-data-default\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.491836 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85680fc8-18ee-4984-8bdb-a489d1e71d39-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.491880 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-kolla-config\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.491900 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-operator-scripts\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.491941 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/85680fc8-18ee-4984-8bdb-a489d1e71d39-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.491963 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j94b6\" (UniqueName: \"kubernetes.io/projected/85680fc8-18ee-4984-8bdb-a489d1e71d39-kube-api-access-j94b6\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.491992 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/85680fc8-18ee-4984-8bdb-a489d1e71d39-config-data-generated\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.492029 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.492375 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") device mount path 
\"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.497315 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-operator-scripts\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.498105 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-config-data-default\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.515004 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85680fc8-18ee-4984-8bdb-a489d1e71d39-kolla-config\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.518097 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/85680fc8-18ee-4984-8bdb-a489d1e71d39-config-data-generated\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.540406 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94b6\" (UniqueName: \"kubernetes.io/projected/85680fc8-18ee-4984-8bdb-a489d1e71d39-kube-api-access-j94b6\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.543300 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85680fc8-18ee-4984-8bdb-a489d1e71d39-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.543452 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/85680fc8-18ee-4984-8bdb-a489d1e71d39-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.587017 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"85680fc8-18ee-4984-8bdb-a489d1e71d39\") " pod="openstack/openstack-galera-0" Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.789354 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e79eab6-cf02-4c69-99bd-2f3512c809f3","Type":"ContainerStarted","Data":"bde2b7726d6cec3065359ae895f20b7d9c28facc9ecaa12e10a1d2f8510e3391"} Jan 23 06:39:19 crc kubenswrapper[4784]: I0123 06:39:19.856049 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 06:39:20 crc kubenswrapper[4784]: I0123 06:39:20.150187 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:39:20 crc kubenswrapper[4784]: I0123 06:39:20.881676 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 06:39:20 crc kubenswrapper[4784]: I0123 06:39:20.884088 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:20 crc kubenswrapper[4784]: I0123 06:39:20.890548 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-bk8b9" Jan 23 06:39:20 crc kubenswrapper[4784]: I0123 06:39:20.891047 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 06:39:20 crc kubenswrapper[4784]: I0123 06:39:20.891255 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 06:39:20 crc kubenswrapper[4784]: I0123 06:39:20.892383 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 06:39:20 crc kubenswrapper[4784]: I0123 06:39:20.894607 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.019868 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.020598 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.021329 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhvld\" (UniqueName: 
\"kubernetes.io/projected/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-kube-api-access-rhvld\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.022271 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.022328 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.022410 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.022517 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.022590 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.087593 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e37da8a-e964-4f8b-aacc-2937130e2e7b","Type":"ContainerStarted","Data":"1ee8cfee12fbbce40e85fbc58a30a10ff8b2da4298a5135b625b9a0d82b12e56"} Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.111627 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.125827 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.125900 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.125934 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.126009 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhvld\" (UniqueName: 
\"kubernetes.io/projected/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-kube-api-access-rhvld\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.126039 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.126061 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.126096 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.126142 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.127099 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.128529 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.129458 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.130825 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.138075 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.144579 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.165722 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.169846 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhvld\" (UniqueName: \"kubernetes.io/projected/8f66f97d-f8a6-4316-ba8b-cbbd922a1655-kube-api-access-rhvld\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.175919 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8f66f97d-f8a6-4316-ba8b-cbbd922a1655\") " pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.216668 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.219537 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.233183 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.233482 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tp2q5" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.240197 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.295218 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.347652 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8477c9f-b8db-4b9e-bf60-1a614700e001-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.347711 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8477c9f-b8db-4b9e-bf60-1a614700e001-kolla-config\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.347745 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8477c9f-b8db-4b9e-bf60-1a614700e001-config-data\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.347789 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8477c9f-b8db-4b9e-bf60-1a614700e001-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.347833 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2r2s\" (UniqueName: \"kubernetes.io/projected/e8477c9f-b8db-4b9e-bf60-1a614700e001-kube-api-access-m2r2s\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.452421 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8477c9f-b8db-4b9e-bf60-1a614700e001-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.452500 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8477c9f-b8db-4b9e-bf60-1a614700e001-kolla-config\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.452535 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8477c9f-b8db-4b9e-bf60-1a614700e001-config-data\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.452592 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8477c9f-b8db-4b9e-bf60-1a614700e001-memcached-tls-certs\") 
pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.452637 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2r2s\" (UniqueName: \"kubernetes.io/projected/e8477c9f-b8db-4b9e-bf60-1a614700e001-kube-api-access-m2r2s\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.456871 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8477c9f-b8db-4b9e-bf60-1a614700e001-config-data\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.457556 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e8477c9f-b8db-4b9e-bf60-1a614700e001-kolla-config\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.469520 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8477c9f-b8db-4b9e-bf60-1a614700e001-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.470101 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8477c9f-b8db-4b9e-bf60-1a614700e001-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.474836 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m2r2s\" (UniqueName: \"kubernetes.io/projected/e8477c9f-b8db-4b9e-bf60-1a614700e001-kube-api-access-m2r2s\") pod \"memcached-0\" (UID: \"e8477c9f-b8db-4b9e-bf60-1a614700e001\") " pod="openstack/memcached-0" Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.477642 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 06:39:21 crc kubenswrapper[4784]: I0123 06:39:21.589314 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 06:39:22 crc kubenswrapper[4784]: I0123 06:39:22.146164 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"85680fc8-18ee-4984-8bdb-a489d1e71d39","Type":"ContainerStarted","Data":"132fa56982a1a9d020180f0e6c1cd2be5d1e5123a596c60c9cc8c5313be298cd"} Jan 23 06:39:22 crc kubenswrapper[4784]: I0123 06:39:22.213233 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 06:39:22 crc kubenswrapper[4784]: I0123 06:39:22.566526 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.242597 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e8477c9f-b8db-4b9e-bf60-1a614700e001","Type":"ContainerStarted","Data":"ba9a2a546e9910ed6a47d40d62b4af5f1500e97e6d73abb660d4ebfe9ff2ca33"} Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.381280 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8f66f97d-f8a6-4316-ba8b-cbbd922a1655","Type":"ContainerStarted","Data":"3d7c48761bc470d69199fdb26aa535c66bea88ed1f446c71595cbb7351055604"} Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.576495 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.578378 4784 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.584945 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jjbk4" Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.591436 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.607020 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.607103 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.701901 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntprq\" (UniqueName: \"kubernetes.io/projected/2c542d52-d20d-41d2-8b80-fb2a9bf5bafa-kube-api-access-ntprq\") pod \"kube-state-metrics-0\" (UID: \"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa\") " pod="openstack/kube-state-metrics-0" Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.804365 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntprq\" (UniqueName: \"kubernetes.io/projected/2c542d52-d20d-41d2-8b80-fb2a9bf5bafa-kube-api-access-ntprq\") pod \"kube-state-metrics-0\" (UID: \"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa\") " pod="openstack/kube-state-metrics-0" Jan 23 06:39:23 crc 
kubenswrapper[4784]: I0123 06:39:23.827271 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntprq\" (UniqueName: \"kubernetes.io/projected/2c542d52-d20d-41d2-8b80-fb2a9bf5bafa-kube-api-access-ntprq\") pod \"kube-state-metrics-0\" (UID: \"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa\") " pod="openstack/kube-state-metrics-0" Jan 23 06:39:23 crc kubenswrapper[4784]: I0123 06:39:23.926567 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 06:39:24 crc kubenswrapper[4784]: I0123 06:39:24.803445 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.071622 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.102997 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.103454 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.117164 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.118094 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.119742 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.120077 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.120156 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.120351 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bvsrx" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.122344 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.132390 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.249886 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" 
Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.249954 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.250012 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.250034 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.250074 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/347f59fd-0378-4413-8880-7d7e9fe9a859-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.250124 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.250149 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6xcj\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-kube-api-access-l6xcj\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.250172 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.250198 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.250228 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.355970 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" 
(UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.356057 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.357155 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.357287 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.357307 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: 
I0123 06:39:25.357337 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.357616 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.357650 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.357811 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/347f59fd-0378-4413-8880-7d7e9fe9a859-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.358003 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-config\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 
06:39:25.358053 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6xcj\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-kube-api-access-l6xcj\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.358107 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.359407 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.365397 4784 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.365639 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/984fdf672f705a078d51f1b73c390067f647610423a2c84302a50834be3d8ee1/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.403893 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/347f59fd-0378-4413-8880-7d7e9fe9a859-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.411037 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6xcj\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-kube-api-access-l6xcj\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.416850 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa","Type":"ContainerStarted","Data":"851c61258df9ce6aed9cdea63dcdfe3ef9704a8f0d6eb006a79e5111c6a26dc1"} Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.455039 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-config\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.559449 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.566800 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.585580 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.740883 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:25 crc kubenswrapper[4784]: I0123 06:39:25.794374 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.918004 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sj5dx"] Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.920131 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.923855 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jbcl4" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.924445 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.924512 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.928974 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-k5dcn"] Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.931142 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.958083 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sj5dx"] Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.968780 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-run\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.968839 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-run-ovn\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.968863 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-lib\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.968933 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d8e3d77-6347-49cf-9ffa-335c063b8f12-ovn-controller-tls-certs\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.968959 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-log\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.968985 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9852b9db-9435-4bdd-a282-7727fd01a651-scripts\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.969007 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-log-ovn\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.969025 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d8e3d77-6347-49cf-9ffa-335c063b8f12-scripts\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.969054 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-run\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.969079 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d8e3d77-6347-49cf-9ffa-335c063b8f12-combined-ca-bundle\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.969105 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq7cp\" (UniqueName: \"kubernetes.io/projected/0d8e3d77-6347-49cf-9ffa-335c063b8f12-kube-api-access-qq7cp\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.969129 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-etc-ovs\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:26 crc kubenswrapper[4784]: I0123 06:39:26.969151 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz2br\" (UniqueName: \"kubernetes.io/projected/9852b9db-9435-4bdd-a282-7727fd01a651-kube-api-access-cz2br\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.013328 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k5dcn"] Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.040362 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.071974 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-lib\") pod \"ovn-controller-ovs-k5dcn\" (UID: 
\"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072091 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d8e3d77-6347-49cf-9ffa-335c063b8f12-ovn-controller-tls-certs\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072113 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-log\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072158 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9852b9db-9435-4bdd-a282-7727fd01a651-scripts\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072178 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-log-ovn\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072212 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d8e3d77-6347-49cf-9ffa-335c063b8f12-scripts\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072242 
4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-run\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072259 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8e3d77-6347-49cf-9ffa-335c063b8f12-combined-ca-bundle\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072285 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq7cp\" (UniqueName: \"kubernetes.io/projected/0d8e3d77-6347-49cf-9ffa-335c063b8f12-kube-api-access-qq7cp\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072310 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-etc-ovs\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072331 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz2br\" (UniqueName: \"kubernetes.io/projected/9852b9db-9435-4bdd-a282-7727fd01a651-kube-api-access-cz2br\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072360 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-run\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.072376 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-run-ovn\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.073057 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-run-ovn\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.081016 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-lib\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.081185 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-run\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.082715 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-var-log\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 
06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.082831 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-log-ovn\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.083342 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9852b9db-9435-4bdd-a282-7727fd01a651-etc-ovs\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.078625 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d8e3d77-6347-49cf-9ffa-335c063b8f12-scripts\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.083420 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d8e3d77-6347-49cf-9ffa-335c063b8f12-var-run\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.100045 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9852b9db-9435-4bdd-a282-7727fd01a651-scripts\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.101091 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0d8e3d77-6347-49cf-9ffa-335c063b8f12-ovn-controller-tls-certs\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.111164 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq7cp\" (UniqueName: \"kubernetes.io/projected/0d8e3d77-6347-49cf-9ffa-335c063b8f12-kube-api-access-qq7cp\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.111645 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.111765 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.111979 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz2br\" (UniqueName: \"kubernetes.io/projected/9852b9db-9435-4bdd-a282-7727fd01a651-kube-api-access-cz2br\") pod \"ovn-controller-ovs-k5dcn\" (UID: \"9852b9db-9435-4bdd-a282-7727fd01a651\") " pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.116204 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.116581 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dp2x9" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.116827 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.118642 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 
23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.121668 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.131308 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8e3d77-6347-49cf-9ffa-335c063b8f12-combined-ca-bundle\") pod \"ovn-controller-sj5dx\" (UID: \"0d8e3d77-6347-49cf-9ffa-335c063b8f12\") " pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.154584 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.266882 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sj5dx" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.288268 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2317a2c2-318f-46c1-98d0-61c93c840b91-config\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.288335 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.288451 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2317a2c2-318f-46c1-98d0-61c93c840b91-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " 
pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.288518 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2317a2c2-318f-46c1-98d0-61c93c840b91-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.288554 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4drg\" (UniqueName: \"kubernetes.io/projected/2317a2c2-318f-46c1-98d0-61c93c840b91-kube-api-access-k4drg\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.289075 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.289116 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.289166 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc 
kubenswrapper[4784]: I0123 06:39:27.291732 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.392146 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2317a2c2-318f-46c1-98d0-61c93c840b91-config\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.392233 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.392311 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2317a2c2-318f-46c1-98d0-61c93c840b91-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.392371 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2317a2c2-318f-46c1-98d0-61c93c840b91-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.392416 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4drg\" (UniqueName: \"kubernetes.io/projected/2317a2c2-318f-46c1-98d0-61c93c840b91-kube-api-access-k4drg\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc 
kubenswrapper[4784]: I0123 06:39:27.392481 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.392509 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.392536 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.395512 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2317a2c2-318f-46c1-98d0-61c93c840b91-config\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.401799 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.402242 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.403394 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2317a2c2-318f-46c1-98d0-61c93c840b91-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.404232 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2317a2c2-318f-46c1-98d0-61c93c840b91-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.411445 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.422816 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4drg\" (UniqueName: \"kubernetes.io/projected/2317a2c2-318f-46c1-98d0-61c93c840b91-kube-api-access-k4drg\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.434316 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2317a2c2-318f-46c1-98d0-61c93c840b91-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " 
pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.486127 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2317a2c2-318f-46c1-98d0-61c93c840b91\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.552108 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerStarted","Data":"c0e5ec172879fa36199cac02ae495b5dee2afc0ad356cf7a08b8e13f7d2aa98d"} Jan 23 06:39:27 crc kubenswrapper[4784]: I0123 06:39:27.824510 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 06:39:28 crc kubenswrapper[4784]: I0123 06:39:28.459387 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sj5dx"] Jan 23 06:39:28 crc kubenswrapper[4784]: I0123 06:39:28.960353 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k5dcn"] Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.444667 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.458883 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dhqg4"] Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.459953 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.466818 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.478475 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dhqg4"] Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.629918 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63a3df2f-490b-4cac-89f8-bec049380a07-config\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.630049 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63a3df2f-490b-4cac-89f8-bec049380a07-ovn-rundir\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.632417 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63a3df2f-490b-4cac-89f8-bec049380a07-ovs-rundir\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.632448 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a3df2f-490b-4cac-89f8-bec049380a07-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " 
pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.632503 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9h2k\" (UniqueName: \"kubernetes.io/projected/63a3df2f-490b-4cac-89f8-bec049380a07-kube-api-access-h9h2k\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.632552 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a3df2f-490b-4cac-89f8-bec049380a07-combined-ca-bundle\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.738486 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63a3df2f-490b-4cac-89f8-bec049380a07-config\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.738548 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63a3df2f-490b-4cac-89f8-bec049380a07-ovn-rundir\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.738607 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63a3df2f-490b-4cac-89f8-bec049380a07-ovs-rundir\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " 
pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.738630 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a3df2f-490b-4cac-89f8-bec049380a07-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.738663 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9h2k\" (UniqueName: \"kubernetes.io/projected/63a3df2f-490b-4cac-89f8-bec049380a07-kube-api-access-h9h2k\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.738696 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a3df2f-490b-4cac-89f8-bec049380a07-combined-ca-bundle\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.740253 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63a3df2f-490b-4cac-89f8-bec049380a07-ovs-rundir\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.741113 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63a3df2f-490b-4cac-89f8-bec049380a07-ovn-rundir\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 
06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.741214 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63a3df2f-490b-4cac-89f8-bec049380a07-config\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.754553 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a3df2f-490b-4cac-89f8-bec049380a07-combined-ca-bundle\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.754913 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a3df2f-490b-4cac-89f8-bec049380a07-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.762346 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9h2k\" (UniqueName: \"kubernetes.io/projected/63a3df2f-490b-4cac-89f8-bec049380a07-kube-api-access-h9h2k\") pod \"ovn-controller-metrics-dhqg4\" (UID: \"63a3df2f-490b-4cac-89f8-bec049380a07\") " pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.802995 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dhqg4" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.906479 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-55djd"] Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.938146 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rgshg"] Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.939791 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.944010 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.954680 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rgshg"] Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.989036 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrtk\" (UniqueName: \"kubernetes.io/projected/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-kube-api-access-8lrtk\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.989098 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.989123 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-dns-svc\") 
pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:29 crc kubenswrapper[4784]: I0123 06:39:29.989170 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-config\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.091204 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-config\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.091332 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lrtk\" (UniqueName: \"kubernetes.io/projected/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-kube-api-access-8lrtk\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.091413 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.091438 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: 
\"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.092434 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.097127 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-config\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.097219 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.155480 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lrtk\" (UniqueName: \"kubernetes.io/projected/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-kube-api-access-8lrtk\") pod \"dnsmasq-dns-7fd796d7df-rgshg\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") " pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.288027 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.634318 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.637481 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.641079 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8x5z7" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.641426 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.642024 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.647907 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.669059 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.824145 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.824182 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b59b602d-4a20-4b11-8577-d13582d30ce8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc 
kubenswrapper[4784]: I0123 06:39:30.824200 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.824253 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.824279 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b59b602d-4a20-4b11-8577-d13582d30ce8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.824320 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59b602d-4a20-4b11-8577-d13582d30ce8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.824342 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.824385 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhpvq\" (UniqueName: \"kubernetes.io/projected/b59b602d-4a20-4b11-8577-d13582d30ce8-kube-api-access-dhpvq\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.939364 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b59b602d-4a20-4b11-8577-d13582d30ce8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.939480 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59b602d-4a20-4b11-8577-d13582d30ce8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.939523 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.939628 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhpvq\" (UniqueName: \"kubernetes.io/projected/b59b602d-4a20-4b11-8577-d13582d30ce8-kube-api-access-dhpvq\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.939682 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.939702 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b59b602d-4a20-4b11-8577-d13582d30ce8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.939721 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.939822 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.940862 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59b602d-4a20-4b11-8577-d13582d30ce8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.943407 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 
crc kubenswrapper[4784]: I0123 06:39:30.943715 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b59b602d-4a20-4b11-8577-d13582d30ce8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:30 crc kubenswrapper[4784]: I0123 06:39:30.944645 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b59b602d-4a20-4b11-8577-d13582d30ce8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:31 crc kubenswrapper[4784]: I0123 06:39:30.971809 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:31 crc kubenswrapper[4784]: I0123 06:39:30.982380 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhpvq\" (UniqueName: \"kubernetes.io/projected/b59b602d-4a20-4b11-8577-d13582d30ce8-kube-api-access-dhpvq\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:31 crc kubenswrapper[4784]: I0123 06:39:31.038893 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:31 crc kubenswrapper[4784]: I0123 06:39:31.199116 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b59b602d-4a20-4b11-8577-d13582d30ce8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:31 crc kubenswrapper[4784]: I0123 06:39:31.285104 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b59b602d-4a20-4b11-8577-d13582d30ce8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:31 crc kubenswrapper[4784]: I0123 06:39:31.364330 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 06:39:33 crc kubenswrapper[4784]: W0123 06:39:33.818162 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d8e3d77_6347_49cf_9ffa_335c063b8f12.slice/crio-7254cbd68cc0bda047aac9d29847de67773c9ade287cdd4e288d637bbc78c5d7 WatchSource:0}: Error finding container 7254cbd68cc0bda047aac9d29847de67773c9ade287cdd4e288d637bbc78c5d7: Status 404 returned error can't find the container with id 7254cbd68cc0bda047aac9d29847de67773c9ade287cdd4e288d637bbc78c5d7 Jan 23 06:39:34 crc kubenswrapper[4784]: I0123 06:39:34.754408 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sj5dx" event={"ID":"0d8e3d77-6347-49cf-9ffa-335c063b8f12","Type":"ContainerStarted","Data":"7254cbd68cc0bda047aac9d29847de67773c9ade287cdd4e288d637bbc78c5d7"} Jan 23 06:39:34 crc kubenswrapper[4784]: I0123 06:39:34.756044 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2317a2c2-318f-46c1-98d0-61c93c840b91","Type":"ContainerStarted","Data":"ef90e2505dcb97977a0c6ab327603da9f45022353ddd49872d2bb9390f8dc192"} Jan 23 06:39:34 crc kubenswrapper[4784]: I0123 06:39:34.757628 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-k5dcn" event={"ID":"9852b9db-9435-4bdd-a282-7727fd01a651","Type":"ContainerStarted","Data":"6b846b78b41c3b819ce6fb7f0efbf1e70ca93a2ae527b62a8555072315622f4c"} Jan 23 06:39:47 crc kubenswrapper[4784]: I0123 06:39:47.762298 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dhqg4"] Jan 23 06:39:48 crc kubenswrapper[4784]: I0123 06:39:48.918020 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dhqg4" event={"ID":"63a3df2f-490b-4cac-89f8-bec049380a07","Type":"ContainerStarted","Data":"b2b1c80d5ae1224f59a6305096d299107ab2147fe311c65f281a54f2d0eb521d"} Jan 23 06:39:52 crc kubenswrapper[4784]: E0123 06:39:52.568958 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 23 06:39:52 crc kubenswrapper[4784]: E0123 06:39:52.569793 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n6dh59bh5d6h55bhc8h5c6h57fh96h68bh5bch7h55dh587h67h586h557h97h5ddh5hc5h5b6h687h564h58h67dh545hfdh576h7dh545h79h695q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2r2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(e8477c9f-b8db-4b9e-bf60-1a614700e001): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:39:52 crc kubenswrapper[4784]: E0123 06:39:52.571259 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="e8477c9f-b8db-4b9e-bf60-1a614700e001" Jan 23 06:39:53 crc kubenswrapper[4784]: E0123 06:39:53.150587 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="e8477c9f-b8db-4b9e-bf60-1a614700e001" Jan 23 06:39:53 crc kubenswrapper[4784]: I0123 06:39:53.603075 4784 patch_prober.go:28] interesting 
pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:39:53 crc kubenswrapper[4784]: I0123 06:39:53.603151 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:39:53 crc kubenswrapper[4784]: I0123 06:39:53.603202 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:39:53 crc kubenswrapper[4784]: I0123 06:39:53.604371 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d73b98a0e27924b52323e09dc829b98e1ffba0a17575fb7657392d46f6773c1"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:39:53 crc kubenswrapper[4784]: I0123 06:39:53.604448 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://7d73b98a0e27924b52323e09dc829b98e1ffba0a17575fb7657392d46f6773c1" gracePeriod=600 Jan 23 06:39:53 crc kubenswrapper[4784]: I0123 06:39:53.969613 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="7d73b98a0e27924b52323e09dc829b98e1ffba0a17575fb7657392d46f6773c1" exitCode=0 Jan 23 06:39:53 crc kubenswrapper[4784]: I0123 06:39:53.969681 
4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"7d73b98a0e27924b52323e09dc829b98e1ffba0a17575fb7657392d46f6773c1"} Jan 23 06:39:53 crc kubenswrapper[4784]: I0123 06:39:53.969739 4784 scope.go:117] "RemoveContainer" containerID="ba1cd80d1af05627cca4bf817be8d5ac071e1d0a3b4a67cef6e491a9167052a0" Jan 23 06:39:57 crc kubenswrapper[4784]: E0123 06:39:57.327511 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 23 06:39:57 crc kubenswrapper[4784]: E0123 06:39:57.328498 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/
lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j94b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(85680fc8-18ee-4984-8bdb-a489d1e71d39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:39:57 crc kubenswrapper[4784]: E0123 06:39:57.329729 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="85680fc8-18ee-4984-8bdb-a489d1e71d39" Jan 23 06:39:58 crc kubenswrapper[4784]: E0123 06:39:58.039669 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="85680fc8-18ee-4984-8bdb-a489d1e71d39" Jan 23 06:40:00 crc kubenswrapper[4784]: I0123 06:40:00.993154 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rgshg"] Jan 23 06:40:02 crc 
kubenswrapper[4784]: E0123 06:40:02.500793 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 23 06:40:02 crc kubenswrapper[4784]: E0123 06:40:02.501475 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkqgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(9e79eab6-cf02-4c69-99bd-2f3512c809f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:02 crc 
kubenswrapper[4784]: E0123 06:40:02.502721 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.087813 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.981570 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.981837 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhvld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(8f66f97d-f8a6-4316-ba8b-cbbd922a1655): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.983616 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.983830 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66hf8h686h557h5fch66dh67dh678h594h578h68h84h658h5d7hd8h79h597h5bbh587h654h5b6h5b6h686h649h5bfh7h558h544h687h8bh65bh66dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cz2br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessPro
be:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-k5dcn_openstack(9852b9db-9435-4bdd-a282-7727fd01a651): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.983955 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="8f66f97d-f8a6-4316-ba8b-cbbd922a1655" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.985181 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-k5dcn" podUID="9852b9db-9435-4bdd-a282-7727fd01a651" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.997718 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.998310 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjsld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(9e37da8a-e964-4f8b-aacc-2937130e2e7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:03 crc kubenswrapper[4784]: E0123 06:40:03.999447 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" Jan 23 06:40:04 crc kubenswrapper[4784]: E0123 06:40:04.105811 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="8f66f97d-f8a6-4316-ba8b-cbbd922a1655" Jan 23 06:40:04 crc kubenswrapper[4784]: E0123 06:40:04.115581 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" Jan 23 06:40:04 crc kubenswrapper[4784]: E0123 06:40:04.121557 
4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-k5dcn" podUID="9852b9db-9435-4bdd-a282-7727fd01a651" Jan 23 06:40:04 crc kubenswrapper[4784]: I0123 06:40:04.399772 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 06:40:09 crc kubenswrapper[4784]: W0123 06:40:09.856030 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f489a74_d7ce_4b5f_90f9_a1075b8e6b97.slice/crio-6538cfb7e1af76f403522ed2a2c532f39235b7a440d03582843b843f9943dab9 WatchSource:0}: Error finding container 6538cfb7e1af76f403522ed2a2c532f39235b7a440d03582843b843f9943dab9: Status 404 returned error can't find the container with id 6538cfb7e1af76f403522ed2a2c532f39235b7a440d03582843b843f9943dab9 Jan 23 06:40:10 crc kubenswrapper[4784]: E0123 06:40:10.093193 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Jan 23 06:40:10 crc kubenswrapper[4784]: E0123 06:40:10.093436 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66hf8h686h557h5fch66dh67dh678h594h578h68h84h658h5d7hd8h79h597h5bbh587h654h5b6h5b6h686h649h5bfh7h558h544h687h8bh65bh66dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq7cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GR
PC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-sj5dx_openstack(0d8e3d77-6347-49cf-9ffa-335c063b8f12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:10 crc kubenswrapper[4784]: E0123 06:40:10.094887 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-sj5dx" podUID="0d8e3d77-6347-49cf-9ffa-335c063b8f12" Jan 23 06:40:10 crc kubenswrapper[4784]: I0123 06:40:10.150656 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" 
event={"ID":"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97","Type":"ContainerStarted","Data":"6538cfb7e1af76f403522ed2a2c532f39235b7a440d03582843b843f9943dab9"} Jan 23 06:40:10 crc kubenswrapper[4784]: I0123 06:40:10.151723 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b59b602d-4a20-4b11-8577-d13582d30ce8","Type":"ContainerStarted","Data":"4e461b122df598e2d58103b7f7839e0017799538cb2b7e76551895030d32ab81"} Jan 23 06:40:10 crc kubenswrapper[4784]: E0123 06:40:10.153808 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-sj5dx" podUID="0d8e3d77-6347-49cf-9ffa-335c063b8f12" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.054070 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.054137 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.054320 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqbn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-k2mm9_openstack(0df1bbce-7b12-4893-b602-871a9de74fab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.055502 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" podUID="0df1bbce-7b12-4893-b602-871a9de74fab" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.054594 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-286x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:
nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-jnd56_openstack(a7aff6ba-048f-4e88-b924-58072427ab1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.058087 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" podUID="a7aff6ba-048f-4e88-b924-58072427ab1e" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.160950 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" podUID="a7aff6ba-048f-4e88-b924-58072427ab1e" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.353808 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.354518 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f9h59fh54dh56ch59ch557h664h574hf5h547h58h74hb6h88h575hfdh644h565h59h5fbhc6hdfhb4hcdh594h56h566h74h65bh648h5f4h5d5q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4drg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecActio
n{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(2317a2c2-318f-46c1-98d0-61c93c840b91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.418867 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 06:40:11 crc 
kubenswrapper[4784]: E0123 06:40:11.419464 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjzld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-fbq7r_openstack(5c97570e-8426-4d44-af59-a556532589c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.420692 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" podUID="5c97570e-8426-4d44-af59-a556532589c6" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.492563 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.492818 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmh6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-55djd_openstack(0f18bc06-83da-4643-b856-3b0d700a1af8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:40:11 crc kubenswrapper[4784]: E0123 06:40:11.493946 4784 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-55djd" podUID="0f18bc06-83da-4643-b856-3b0d700a1af8" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.745768 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.757589 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.773178 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.944613 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-dns-svc\") pod \"0f18bc06-83da-4643-b856-3b0d700a1af8\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.944814 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-dns-svc\") pod \"5c97570e-8426-4d44-af59-a556532589c6\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.944916 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjzld\" (UniqueName: \"kubernetes.io/projected/5c97570e-8426-4d44-af59-a556532589c6-kube-api-access-pjzld\") pod \"5c97570e-8426-4d44-af59-a556532589c6\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.944980 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-vmh6h\" (UniqueName: \"kubernetes.io/projected/0f18bc06-83da-4643-b856-3b0d700a1af8-kube-api-access-vmh6h\") pod \"0f18bc06-83da-4643-b856-3b0d700a1af8\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.945018 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config\") pod \"5c97570e-8426-4d44-af59-a556532589c6\" (UID: \"5c97570e-8426-4d44-af59-a556532589c6\") " Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.945093 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-config\") pod \"0f18bc06-83da-4643-b856-3b0d700a1af8\" (UID: \"0f18bc06-83da-4643-b856-3b0d700a1af8\") " Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.945159 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqbn7\" (UniqueName: \"kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7\") pod \"0df1bbce-7b12-4893-b602-871a9de74fab\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.945203 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config\") pod \"0df1bbce-7b12-4893-b602-871a9de74fab\" (UID: \"0df1bbce-7b12-4893-b602-871a9de74fab\") " Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.945450 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f18bc06-83da-4643-b856-3b0d700a1af8" (UID: "0f18bc06-83da-4643-b856-3b0d700a1af8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.945838 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config" (OuterVolumeSpecName: "config") pod "5c97570e-8426-4d44-af59-a556532589c6" (UID: "5c97570e-8426-4d44-af59-a556532589c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.946042 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config" (OuterVolumeSpecName: "config") pod "0df1bbce-7b12-4893-b602-871a9de74fab" (UID: "0df1bbce-7b12-4893-b602-871a9de74fab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.946129 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c97570e-8426-4d44-af59-a556532589c6" (UID: "5c97570e-8426-4d44-af59-a556532589c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.946151 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-config" (OuterVolumeSpecName: "config") pod "0f18bc06-83da-4643-b856-3b0d700a1af8" (UID: "0f18bc06-83da-4643-b856-3b0d700a1af8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.952258 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f18bc06-83da-4643-b856-3b0d700a1af8-kube-api-access-vmh6h" (OuterVolumeSpecName: "kube-api-access-vmh6h") pod "0f18bc06-83da-4643-b856-3b0d700a1af8" (UID: "0f18bc06-83da-4643-b856-3b0d700a1af8"). InnerVolumeSpecName "kube-api-access-vmh6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.952330 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7" (OuterVolumeSpecName: "kube-api-access-pqbn7") pod "0df1bbce-7b12-4893-b602-871a9de74fab" (UID: "0df1bbce-7b12-4893-b602-871a9de74fab"). InnerVolumeSpecName "kube-api-access-pqbn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:12 crc kubenswrapper[4784]: I0123 06:40:12.954632 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c97570e-8426-4d44-af59-a556532589c6-kube-api-access-pjzld" (OuterVolumeSpecName: "kube-api-access-pjzld") pod "5c97570e-8426-4d44-af59-a556532589c6" (UID: "5c97570e-8426-4d44-af59-a556532589c6"). InnerVolumeSpecName "kube-api-access-pjzld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.047647 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjzld\" (UniqueName: \"kubernetes.io/projected/5c97570e-8426-4d44-af59-a556532589c6-kube-api-access-pjzld\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.047697 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmh6h\" (UniqueName: \"kubernetes.io/projected/0f18bc06-83da-4643-b856-3b0d700a1af8-kube-api-access-vmh6h\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.047719 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.047733 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.047762 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqbn7\" (UniqueName: \"kubernetes.io/projected/0df1bbce-7b12-4893-b602-871a9de74fab-kube-api-access-pqbn7\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.047776 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df1bbce-7b12-4893-b602-871a9de74fab-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.047793 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f18bc06-83da-4643-b856-3b0d700a1af8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.047805 4784 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c97570e-8426-4d44-af59-a556532589c6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.183679 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-55djd" event={"ID":"0f18bc06-83da-4643-b856-3b0d700a1af8","Type":"ContainerDied","Data":"4e9b21457aa545efa450ddb2ff28208ae40632a92ce561a34dbff731d47eb72e"} Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.183945 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-55djd" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.185219 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" event={"ID":"0df1bbce-7b12-4893-b602-871a9de74fab","Type":"ContainerDied","Data":"17cb18848749919d8cc6c83a6c18a34c164eebcc2dcdf4d2f85831859c748cf9"} Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.185416 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k2mm9" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.186552 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" event={"ID":"5c97570e-8426-4d44-af59-a556532589c6","Type":"ContainerDied","Data":"7bf28ea31400aa980262b120f625ad46b6354fa0bb418794438b67237236ff93"} Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.186705 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fbq7r" Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.274891 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-55djd"] Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.294831 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-55djd"] Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.313026 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fbq7r"] Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.326959 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fbq7r"] Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.341186 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k2mm9"] Jan 23 06:40:13 crc kubenswrapper[4784]: I0123 06:40:13.347568 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k2mm9"] Jan 23 06:40:13 crc kubenswrapper[4784]: E0123 06:40:13.548116 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 06:40:13 crc kubenswrapper[4784]: E0123 06:40:13.548196 4784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 06:40:13 crc kubenswrapper[4784]: E0123 06:40:13.549829 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntprq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(2c542d52-d20d-41d2-8b80-fb2a9bf5bafa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Jan 23 06:40:13 crc kubenswrapper[4784]: E0123 06:40:13.551093 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" Jan 23 06:40:14 crc kubenswrapper[4784]: E0123 06:40:14.203189 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" Jan 23 06:40:14 crc kubenswrapper[4784]: E0123 06:40:14.351444 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="2317a2c2-318f-46c1-98d0-61c93c840b91" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.225036 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b59b602d-4a20-4b11-8577-d13582d30ce8","Type":"ContainerStarted","Data":"5a90abbca0d5f041cf1816fbacffdf3444696dbc07bea7c2632eb240b24753ad"} Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.226017 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b59b602d-4a20-4b11-8577-d13582d30ce8","Type":"ContainerStarted","Data":"b0805b98d3ffe94c74a659ada479079d7a550ec64509b38103d877ba76e79ad9"} Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.230590 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" 
event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"99f5c7da473bb191e287690718f667aa1ba0bc87b545db802bd06bfff3e98701"} Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.232612 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dhqg4" event={"ID":"63a3df2f-490b-4cac-89f8-bec049380a07","Type":"ContainerStarted","Data":"e6d1577484e0aa42525fb2098004e079d8c84e235b514f871dc9dc15247dfedd"} Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.235853 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e8477c9f-b8db-4b9e-bf60-1a614700e001","Type":"ContainerStarted","Data":"ce8a4b771759b19de172e5ecc504e9448d4cc0d82ad97452924ebda608d81fe2"} Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.236080 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.240603 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"85680fc8-18ee-4984-8bdb-a489d1e71d39","Type":"ContainerStarted","Data":"6d20b20736005fb68d9059bad891458f22eee43154253b7cd66d3fe2366ef861"} Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.244629 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2317a2c2-318f-46c1-98d0-61c93c840b91","Type":"ContainerStarted","Data":"e3adcc7575fed8df6ded15300fa24db3d35ae56515aa781e8533a12cec3d32dd"} Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.246866 4784 generic.go:334] "Generic (PLEG): container finished" podID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerID="e7d08b9432787000e98db0e797dbc5bcc97fdeb8c3306b28df4cb955001f3582" exitCode=0 Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.246961 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" 
event={"ID":"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97","Type":"ContainerDied","Data":"e7d08b9432787000e98db0e797dbc5bcc97fdeb8c3306b28df4cb955001f3582"} Jan 23 06:40:15 crc kubenswrapper[4784]: E0123 06:40:15.247462 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="2317a2c2-318f-46c1-98d0-61c93c840b91" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.266281 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=42.354079861 podStartE2EDuration="46.266255511s" podCreationTimestamp="2026-01-23 06:39:29 +0000 UTC" firstStartedPulling="2026-01-23 06:40:10.084301654 +0000 UTC m=+1213.316809628" lastFinishedPulling="2026-01-23 06:40:13.996477304 +0000 UTC m=+1217.228985278" observedRunningTime="2026-01-23 06:40:15.256634955 +0000 UTC m=+1218.489142929" watchObservedRunningTime="2026-01-23 06:40:15.266255511 +0000 UTC m=+1218.498763505" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.276250 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df1bbce-7b12-4893-b602-871a9de74fab" path="/var/lib/kubelet/pods/0df1bbce-7b12-4893-b602-871a9de74fab/volumes" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.277180 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f18bc06-83da-4643-b856-3b0d700a1af8" path="/var/lib/kubelet/pods/0f18bc06-83da-4643-b856-3b0d700a1af8/volumes" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.277707 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c97570e-8426-4d44-af59-a556532589c6" path="/var/lib/kubelet/pods/5c97570e-8426-4d44-af59-a556532589c6/volumes" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.321165 4784 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.612745972 podStartE2EDuration="54.32113883s" podCreationTimestamp="2026-01-23 06:39:21 +0000 UTC" firstStartedPulling="2026-01-23 06:39:22.224742589 +0000 UTC m=+1165.457250563" lastFinishedPulling="2026-01-23 06:40:13.933135437 +0000 UTC m=+1217.165643421" observedRunningTime="2026-01-23 06:40:15.320641488 +0000 UTC m=+1218.553149462" watchObservedRunningTime="2026-01-23 06:40:15.32113883 +0000 UTC m=+1218.553646814" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.446964 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dhqg4" podStartSLOduration=20.445991682 podStartE2EDuration="46.446935312s" podCreationTimestamp="2026-01-23 06:39:29 +0000 UTC" firstStartedPulling="2026-01-23 06:39:48.06008612 +0000 UTC m=+1191.292594094" lastFinishedPulling="2026-01-23 06:40:14.06102975 +0000 UTC m=+1217.293537724" observedRunningTime="2026-01-23 06:40:15.404473008 +0000 UTC m=+1218.636980992" watchObservedRunningTime="2026-01-23 06:40:15.446935312 +0000 UTC m=+1218.679443286" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.837829 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jnd56"] Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.887506 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4rnwg"] Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.889958 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.895662 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 06:40:15 crc kubenswrapper[4784]: I0123 06:40:15.899885 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4rnwg"] Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.084520 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.084579 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw5kz\" (UniqueName: \"kubernetes.io/projected/9ae42b9d-0c35-4fba-8374-23b58223dce3-kube-api-access-xw5kz\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.084612 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.084683 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-config\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.085600 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.187204 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.187277 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw5kz\" (UniqueName: \"kubernetes.io/projected/9ae42b9d-0c35-4fba-8374-23b58223dce3-kube-api-access-xw5kz\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.187316 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.187376 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-config\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 
06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.187401 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.188695 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.190038 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.191402 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.192841 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-config\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.219485 4784 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xw5kz\" (UniqueName: \"kubernetes.io/projected/9ae42b9d-0c35-4fba-8374-23b58223dce3-kube-api-access-xw5kz\") pod \"dnsmasq-dns-86db49b7ff-4rnwg\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.233355 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.258577 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" event={"ID":"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97","Type":"ContainerStarted","Data":"59fc82851f802c4d5e39c4e52510c7c40e5b86f21820d81d8225e0edfd6c2d8c"} Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.258665 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.263378 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.264615 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8f66f97d-f8a6-4316-ba8b-cbbd922a1655","Type":"ContainerStarted","Data":"9f31846a39797e691d8a222527cb574d74610f9266e5fbdf9668f4ac6f6dfa59"} Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.270790 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.270832 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jnd56" event={"ID":"a7aff6ba-048f-4e88-b924-58072427ab1e","Type":"ContainerDied","Data":"87455a13ae19a7c78b46b1d4c39c68c8e3e9202d6b1734a07ed81af6bf233319"} Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.289121 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" podStartSLOduration=43.179465877 podStartE2EDuration="47.28909638s" podCreationTimestamp="2026-01-23 06:39:29 +0000 UTC" firstStartedPulling="2026-01-23 06:40:09.880839203 +0000 UTC m=+1213.113347177" lastFinishedPulling="2026-01-23 06:40:13.990469706 +0000 UTC m=+1217.222977680" observedRunningTime="2026-01-23 06:40:16.280238282 +0000 UTC m=+1219.512746246" watchObservedRunningTime="2026-01-23 06:40:16.28909638 +0000 UTC m=+1219.521604354" Jan 23 06:40:16 crc kubenswrapper[4784]: E0123 06:40:16.328630 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="2317a2c2-318f-46c1-98d0-61c93c840b91" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.364944 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.365105 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.390908 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-dns-svc\") pod 
\"a7aff6ba-048f-4e88-b924-58072427ab1e\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.391068 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-config\") pod \"a7aff6ba-048f-4e88-b924-58072427ab1e\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.391247 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-286x6\" (UniqueName: \"kubernetes.io/projected/a7aff6ba-048f-4e88-b924-58072427ab1e-kube-api-access-286x6\") pod \"a7aff6ba-048f-4e88-b924-58072427ab1e\" (UID: \"a7aff6ba-048f-4e88-b924-58072427ab1e\") " Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.392951 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7aff6ba-048f-4e88-b924-58072427ab1e" (UID: "a7aff6ba-048f-4e88-b924-58072427ab1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.394537 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-config" (OuterVolumeSpecName: "config") pod "a7aff6ba-048f-4e88-b924-58072427ab1e" (UID: "a7aff6ba-048f-4e88-b924-58072427ab1e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.494568 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.495091 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7aff6ba-048f-4e88-b924-58072427ab1e-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.571269 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7aff6ba-048f-4e88-b924-58072427ab1e-kube-api-access-286x6" (OuterVolumeSpecName: "kube-api-access-286x6") pod "a7aff6ba-048f-4e88-b924-58072427ab1e" (UID: "a7aff6ba-048f-4e88-b924-58072427ab1e"). InnerVolumeSpecName "kube-api-access-286x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.596570 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-286x6\" (UniqueName: \"kubernetes.io/projected/a7aff6ba-048f-4e88-b924-58072427ab1e-kube-api-access-286x6\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.805468 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4rnwg"] Jan 23 06:40:16 crc kubenswrapper[4784]: W0123 06:40:16.875953 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ae42b9d_0c35_4fba_8374_23b58223dce3.slice/crio-b69b287abc54e951587ea6365b311673315f84627941cc57d3439ff90c3088b3 WatchSource:0}: Error finding container b69b287abc54e951587ea6365b311673315f84627941cc57d3439ff90c3088b3: Status 404 returned error can't find the container with id 
b69b287abc54e951587ea6365b311673315f84627941cc57d3439ff90c3088b3 Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.973796 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jnd56"] Jan 23 06:40:16 crc kubenswrapper[4784]: I0123 06:40:16.993225 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jnd56"] Jan 23 06:40:17 crc kubenswrapper[4784]: I0123 06:40:17.265066 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7aff6ba-048f-4e88-b924-58072427ab1e" path="/var/lib/kubelet/pods/a7aff6ba-048f-4e88-b924-58072427ab1e/volumes" Jan 23 06:40:17 crc kubenswrapper[4784]: I0123 06:40:17.287632 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" event={"ID":"9ae42b9d-0c35-4fba-8374-23b58223dce3","Type":"ContainerStarted","Data":"b69b287abc54e951587ea6365b311673315f84627941cc57d3439ff90c3088b3"} Jan 23 06:40:18 crc kubenswrapper[4784]: I0123 06:40:18.302154 4784 generic.go:334] "Generic (PLEG): container finished" podID="9ae42b9d-0c35-4fba-8374-23b58223dce3" containerID="ad87d74791cd141ead7dd35410455dff29fec5b76b03658e3dc25d8874c63d74" exitCode=0 Jan 23 06:40:18 crc kubenswrapper[4784]: I0123 06:40:18.302230 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" event={"ID":"9ae42b9d-0c35-4fba-8374-23b58223dce3","Type":"ContainerDied","Data":"ad87d74791cd141ead7dd35410455dff29fec5b76b03658e3dc25d8874c63d74"} Jan 23 06:40:18 crc kubenswrapper[4784]: I0123 06:40:18.309385 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerStarted","Data":"1a9ef858ff7f17c95f4c419002b58ab8c828215592da098030eb43780983b0ac"} Jan 23 06:40:19 crc kubenswrapper[4784]: I0123 06:40:19.319275 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" event={"ID":"9ae42b9d-0c35-4fba-8374-23b58223dce3","Type":"ContainerStarted","Data":"bd58f80595dd95472a3ba58e0fd0beba3731420b0302d09fa3db96d08489fada"} Jan 23 06:40:19 crc kubenswrapper[4784]: I0123 06:40:19.319709 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:19 crc kubenswrapper[4784]: I0123 06:40:19.321912 4784 generic.go:334] "Generic (PLEG): container finished" podID="85680fc8-18ee-4984-8bdb-a489d1e71d39" containerID="6d20b20736005fb68d9059bad891458f22eee43154253b7cd66d3fe2366ef861" exitCode=0 Jan 23 06:40:19 crc kubenswrapper[4784]: I0123 06:40:19.322132 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"85680fc8-18ee-4984-8bdb-a489d1e71d39","Type":"ContainerDied","Data":"6d20b20736005fb68d9059bad891458f22eee43154253b7cd66d3fe2366ef861"} Jan 23 06:40:19 crc kubenswrapper[4784]: I0123 06:40:19.346492 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" podStartSLOduration=4.346455971 podStartE2EDuration="4.346455971s" podCreationTimestamp="2026-01-23 06:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:19.34030989 +0000 UTC m=+1222.572817864" watchObservedRunningTime="2026-01-23 06:40:19.346455971 +0000 UTC m=+1222.578963945" Jan 23 06:40:19 crc kubenswrapper[4784]: I0123 06:40:19.423311 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 06:40:19 crc kubenswrapper[4784]: I0123 06:40:19.492365 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 06:40:20 crc kubenswrapper[4784]: I0123 06:40:20.290908 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" Jan 23 06:40:20 crc kubenswrapper[4784]: I0123 06:40:20.335490 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e37da8a-e964-4f8b-aacc-2937130e2e7b","Type":"ContainerStarted","Data":"6c523486f92879d29f8c12e1686060624335e68261e81600c144abb26218a886"} Jan 23 06:40:20 crc kubenswrapper[4784]: I0123 06:40:20.340582 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"85680fc8-18ee-4984-8bdb-a489d1e71d39","Type":"ContainerStarted","Data":"a2d2cd1176447ef8f5bac07bffba35340f5a1788b42a709fc6e4b8ef1a895d15"} Jan 23 06:40:20 crc kubenswrapper[4784]: I0123 06:40:20.347779 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5dcn" event={"ID":"9852b9db-9435-4bdd-a282-7727fd01a651","Type":"ContainerStarted","Data":"2819604ee47fc7428bc4306047c9bb7150447114430c9b308383104ab73d0a43"} Jan 23 06:40:20 crc kubenswrapper[4784]: I0123 06:40:20.350790 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e79eab6-cf02-4c69-99bd-2f3512c809f3","Type":"ContainerStarted","Data":"108ed665071583075faa37237a76c5edf56e95c94290ca4776fc25ebc9dafb9e"} Jan 23 06:40:20 crc kubenswrapper[4784]: I0123 06:40:20.357929 4784 generic.go:334] "Generic (PLEG): container finished" podID="8f66f97d-f8a6-4316-ba8b-cbbd922a1655" containerID="9f31846a39797e691d8a222527cb574d74610f9266e5fbdf9668f4ac6f6dfa59" exitCode=0 Jan 23 06:40:20 crc kubenswrapper[4784]: I0123 06:40:20.359021 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8f66f97d-f8a6-4316-ba8b-cbbd922a1655","Type":"ContainerDied","Data":"9f31846a39797e691d8a222527cb574d74610f9266e5fbdf9668f4ac6f6dfa59"} Jan 23 06:40:20 crc kubenswrapper[4784]: I0123 06:40:20.475008 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-galera-0" podStartSLOduration=9.697264797999999 podStartE2EDuration="1m2.474974627s" podCreationTimestamp="2026-01-23 06:39:18 +0000 UTC" firstStartedPulling="2026-01-23 06:39:21.217028792 +0000 UTC m=+1164.449536766" lastFinishedPulling="2026-01-23 06:40:13.994738621 +0000 UTC m=+1217.227246595" observedRunningTime="2026-01-23 06:40:20.467903923 +0000 UTC m=+1223.700411927" watchObservedRunningTime="2026-01-23 06:40:20.474974627 +0000 UTC m=+1223.707482601" Jan 23 06:40:21 crc kubenswrapper[4784]: I0123 06:40:21.369411 4784 generic.go:334] "Generic (PLEG): container finished" podID="9852b9db-9435-4bdd-a282-7727fd01a651" containerID="2819604ee47fc7428bc4306047c9bb7150447114430c9b308383104ab73d0a43" exitCode=0 Jan 23 06:40:21 crc kubenswrapper[4784]: I0123 06:40:21.369517 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5dcn" event={"ID":"9852b9db-9435-4bdd-a282-7727fd01a651","Type":"ContainerDied","Data":"2819604ee47fc7428bc4306047c9bb7150447114430c9b308383104ab73d0a43"} Jan 23 06:40:21 crc kubenswrapper[4784]: I0123 06:40:21.372465 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8f66f97d-f8a6-4316-ba8b-cbbd922a1655","Type":"ContainerStarted","Data":"c0363d99e3bb3c4ead9da9e5ce06e007b26f523f665ab0f3f7e0b5d740169319"} Jan 23 06:40:21 crc kubenswrapper[4784]: I0123 06:40:21.440303 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371974.414503 podStartE2EDuration="1m2.440272422s" podCreationTimestamp="2026-01-23 06:39:19 +0000 UTC" firstStartedPulling="2026-01-23 06:39:22.78180028 +0000 UTC m=+1166.014308264" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:21.440011964 +0000 UTC m=+1224.672519938" watchObservedRunningTime="2026-01-23 06:40:21.440272422 +0000 UTC m=+1224.672780416" Jan 23 06:40:21 crc 
kubenswrapper[4784]: I0123 06:40:21.591594 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 06:40:22 crc kubenswrapper[4784]: I0123 06:40:22.384529 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5dcn" event={"ID":"9852b9db-9435-4bdd-a282-7727fd01a651","Type":"ContainerStarted","Data":"9d781bb797ee2b7af3c111e89185022ffb373106cfd4d737d98fb7c0b631bbd4"} Jan 23 06:40:22 crc kubenswrapper[4784]: I0123 06:40:22.385044 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k5dcn" event={"ID":"9852b9db-9435-4bdd-a282-7727fd01a651","Type":"ContainerStarted","Data":"ad953e9512ff247d14db0cc901fd16d3bee7a0413ba928b5c7dcd68d856295c1"} Jan 23 06:40:22 crc kubenswrapper[4784]: I0123 06:40:22.385322 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:40:22 crc kubenswrapper[4784]: I0123 06:40:22.412179 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-k5dcn" podStartSLOduration=10.447117395 podStartE2EDuration="56.412150997s" podCreationTimestamp="2026-01-23 06:39:26 +0000 UTC" firstStartedPulling="2026-01-23 06:39:33.822502999 +0000 UTC m=+1177.055010973" lastFinishedPulling="2026-01-23 06:40:19.787536601 +0000 UTC m=+1223.020044575" observedRunningTime="2026-01-23 06:40:22.406262433 +0000 UTC m=+1225.638770427" watchObservedRunningTime="2026-01-23 06:40:22.412150997 +0000 UTC m=+1225.644658971" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.443311 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4rnwg"] Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.444092 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" podUID="9ae42b9d-0c35-4fba-8374-23b58223dce3" containerName="dnsmasq-dns" 
containerID="cri-o://bd58f80595dd95472a3ba58e0fd0beba3731420b0302d09fa3db96d08489fada" gracePeriod=10 Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.448079 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.511102 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-wvmgs"] Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.519959 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.531393 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wvmgs"] Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.551951 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.698057 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-dns-svc\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.698119 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287zb\" (UniqueName: \"kubernetes.io/projected/cd1979d0-9c1b-4625-ba5e-20942e12e569-kube-api-access-287zb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.698174 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.698222 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-config\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.698264 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.800651 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.801239 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-dns-svc\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.801264 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-287zb\" (UniqueName: 
\"kubernetes.io/projected/cd1979d0-9c1b-4625-ba5e-20942e12e569-kube-api-access-287zb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.801324 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.801368 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-config\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.801912 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.802165 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-config\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.802526 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-dns-svc\") pod \"dnsmasq-dns-698758b865-wvmgs\" 
(UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.802808 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.830987 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-287zb\" (UniqueName: \"kubernetes.io/projected/cd1979d0-9c1b-4625-ba5e-20942e12e569-kube-api-access-287zb\") pod \"dnsmasq-dns-698758b865-wvmgs\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:23 crc kubenswrapper[4784]: I0123 06:40:23.862813 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.384848 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wvmgs"] Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.592438 4784 generic.go:334] "Generic (PLEG): container finished" podID="9ae42b9d-0c35-4fba-8374-23b58223dce3" containerID="bd58f80595dd95472a3ba58e0fd0beba3731420b0302d09fa3db96d08489fada" exitCode=0 Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.592868 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" event={"ID":"9ae42b9d-0c35-4fba-8374-23b58223dce3","Type":"ContainerDied","Data":"bd58f80595dd95472a3ba58e0fd0beba3731420b0302d09fa3db96d08489fada"} Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.595837 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wvmgs" 
event={"ID":"cd1979d0-9c1b-4625-ba5e-20942e12e569","Type":"ContainerStarted","Data":"4626c85196a6f22b28d0794a32a1ee7fadfcf5286d8f2c122cca4d294a406abf"} Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.603601 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.612047 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.648520 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.648830 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.648989 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.649205 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-chj7p" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.667487 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.740426 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcscd\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-kube-api-access-hcscd\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.740527 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/abb5c886-7378-4bdd-b56a-cc803db75cbd-cache\") pod \"swift-storage-0\" (UID: 
\"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.740551 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb5c886-7378-4bdd-b56a-cc803db75cbd-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.740617 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.740675 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.740692 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/abb5c886-7378-4bdd-b56a-cc803db75cbd-lock\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.746564 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.842484 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-nb\") pod \"9ae42b9d-0c35-4fba-8374-23b58223dce3\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.842558 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-config\") pod \"9ae42b9d-0c35-4fba-8374-23b58223dce3\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.842630 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-sb\") pod \"9ae42b9d-0c35-4fba-8374-23b58223dce3\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.842817 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw5kz\" (UniqueName: \"kubernetes.io/projected/9ae42b9d-0c35-4fba-8374-23b58223dce3-kube-api-access-xw5kz\") pod \"9ae42b9d-0c35-4fba-8374-23b58223dce3\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.842839 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-dns-svc\") pod \"9ae42b9d-0c35-4fba-8374-23b58223dce3\" (UID: \"9ae42b9d-0c35-4fba-8374-23b58223dce3\") " Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.843185 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.843238 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.843261 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/abb5c886-7378-4bdd-b56a-cc803db75cbd-lock\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.843301 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcscd\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-kube-api-access-hcscd\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.843350 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/abb5c886-7378-4bdd-b56a-cc803db75cbd-cache\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.843366 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb5c886-7378-4bdd-b56a-cc803db75cbd-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: 
I0123 06:40:24.844185 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.846457 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/abb5c886-7378-4bdd-b56a-cc803db75cbd-cache\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.846580 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/abb5c886-7378-4bdd-b56a-cc803db75cbd-lock\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: E0123 06:40:24.846692 4784 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 06:40:24 crc kubenswrapper[4784]: E0123 06:40:24.846728 4784 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 06:40:24 crc kubenswrapper[4784]: E0123 06:40:24.847005 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift podName:abb5c886-7378-4bdd-b56a-cc803db75cbd nodeName:}" failed. No retries permitted until 2026-01-23 06:40:25.346965048 +0000 UTC m=+1228.579473022 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift") pod "swift-storage-0" (UID: "abb5c886-7378-4bdd-b56a-cc803db75cbd") : configmap "swift-ring-files" not found Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.851724 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb5c886-7378-4bdd-b56a-cc803db75cbd-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.858009 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae42b9d-0c35-4fba-8374-23b58223dce3-kube-api-access-xw5kz" (OuterVolumeSpecName: "kube-api-access-xw5kz") pod "9ae42b9d-0c35-4fba-8374-23b58223dce3" (UID: "9ae42b9d-0c35-4fba-8374-23b58223dce3"). InnerVolumeSpecName "kube-api-access-xw5kz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.865599 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcscd\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-kube-api-access-hcscd\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.875380 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.891762 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-config" (OuterVolumeSpecName: "config") pod "9ae42b9d-0c35-4fba-8374-23b58223dce3" (UID: "9ae42b9d-0c35-4fba-8374-23b58223dce3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.903743 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ae42b9d-0c35-4fba-8374-23b58223dce3" (UID: "9ae42b9d-0c35-4fba-8374-23b58223dce3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.914583 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ae42b9d-0c35-4fba-8374-23b58223dce3" (UID: "9ae42b9d-0c35-4fba-8374-23b58223dce3"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.918415 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ae42b9d-0c35-4fba-8374-23b58223dce3" (UID: "9ae42b9d-0c35-4fba-8374-23b58223dce3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.945832 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw5kz\" (UniqueName: \"kubernetes.io/projected/9ae42b9d-0c35-4fba-8374-23b58223dce3-kube-api-access-xw5kz\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.945886 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.945903 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.945916 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:24 crc kubenswrapper[4784]: I0123 06:40:24.945929 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae42b9d-0c35-4fba-8374-23b58223dce3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.354570 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:25 crc kubenswrapper[4784]: E0123 06:40:25.354949 4784 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 06:40:25 crc kubenswrapper[4784]: E0123 06:40:25.356717 4784 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 06:40:25 crc kubenswrapper[4784]: E0123 06:40:25.356838 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift podName:abb5c886-7378-4bdd-b56a-cc803db75cbd nodeName:}" failed. No retries permitted until 2026-01-23 06:40:26.356811358 +0000 UTC m=+1229.589319422 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift") pod "swift-storage-0" (UID: "abb5c886-7378-4bdd-b56a-cc803db75cbd") : configmap "swift-ring-files" not found Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.608107 4784 generic.go:334] "Generic (PLEG): container finished" podID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerID="bc48dc4b3dd963d5b237d173e341920e960ec9a8cec18c2764eebcb89441ebf8" exitCode=0 Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.608230 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wvmgs" event={"ID":"cd1979d0-9c1b-4625-ba5e-20942e12e569","Type":"ContainerDied","Data":"bc48dc4b3dd963d5b237d173e341920e960ec9a8cec18c2764eebcb89441ebf8"} Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.612388 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" 
event={"ID":"9ae42b9d-0c35-4fba-8374-23b58223dce3","Type":"ContainerDied","Data":"b69b287abc54e951587ea6365b311673315f84627941cc57d3439ff90c3088b3"} Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.612463 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-4rnwg" Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.612487 4784 scope.go:117] "RemoveContainer" containerID="bd58f80595dd95472a3ba58e0fd0beba3731420b0302d09fa3db96d08489fada" Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.638923 4784 scope.go:117] "RemoveContainer" containerID="ad87d74791cd141ead7dd35410455dff29fec5b76b03658e3dc25d8874c63d74" Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.686957 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4rnwg"] Jan 23 06:40:25 crc kubenswrapper[4784]: I0123 06:40:25.705881 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-4rnwg"] Jan 23 06:40:26 crc kubenswrapper[4784]: I0123 06:40:26.379961 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:26 crc kubenswrapper[4784]: E0123 06:40:26.380423 4784 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 06:40:26 crc kubenswrapper[4784]: E0123 06:40:26.381119 4784 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 06:40:26 crc kubenswrapper[4784]: E0123 06:40:26.381170 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift 
podName:abb5c886-7378-4bdd-b56a-cc803db75cbd nodeName:}" failed. No retries permitted until 2026-01-23 06:40:28.381154233 +0000 UTC m=+1231.613662207 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift") pod "swift-storage-0" (UID: "abb5c886-7378-4bdd-b56a-cc803db75cbd") : configmap "swift-ring-files" not found Jan 23 06:40:26 crc kubenswrapper[4784]: I0123 06:40:26.622734 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wvmgs" event={"ID":"cd1979d0-9c1b-4625-ba5e-20942e12e569","Type":"ContainerStarted","Data":"afc82229e0f8c8f306de6a444605332a7e502c7f53ba0c937c7bcd09c3ed8c63"} Jan 23 06:40:26 crc kubenswrapper[4784]: I0123 06:40:26.623114 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:26 crc kubenswrapper[4784]: I0123 06:40:26.625816 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sj5dx" event={"ID":"0d8e3d77-6347-49cf-9ffa-335c063b8f12","Type":"ContainerStarted","Data":"22c793eafd8742ebeaae40b1a9822857bddba0f033aa3aca80f5213d3152258d"} Jan 23 06:40:26 crc kubenswrapper[4784]: I0123 06:40:26.626048 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sj5dx" Jan 23 06:40:26 crc kubenswrapper[4784]: I0123 06:40:26.646699 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-wvmgs" podStartSLOduration=3.646664929 podStartE2EDuration="3.646664929s" podCreationTimestamp="2026-01-23 06:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:26.646174137 +0000 UTC m=+1229.878682121" watchObservedRunningTime="2026-01-23 06:40:26.646664929 +0000 UTC m=+1229.879172903" Jan 23 06:40:26 crc 
kubenswrapper[4784]: I0123 06:40:26.667234 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sj5dx" podStartSLOduration=8.646575163 podStartE2EDuration="1m0.667202234s" podCreationTimestamp="2026-01-23 06:39:26 +0000 UTC" firstStartedPulling="2026-01-23 06:39:33.820723176 +0000 UTC m=+1177.053231150" lastFinishedPulling="2026-01-23 06:40:25.841350247 +0000 UTC m=+1229.073858221" observedRunningTime="2026-01-23 06:40:26.666199669 +0000 UTC m=+1229.898707653" watchObservedRunningTime="2026-01-23 06:40:26.667202234 +0000 UTC m=+1229.899710218" Jan 23 06:40:27 crc kubenswrapper[4784]: E0123 06:40:27.048878 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod347f59fd_0378_4413_8880_7d7e9fe9a859.slice/crio-conmon-1a9ef858ff7f17c95f4c419002b58ab8c828215592da098030eb43780983b0ac.scope\": RecentStats: unable to find data in memory cache]" Jan 23 06:40:27 crc kubenswrapper[4784]: I0123 06:40:27.265520 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae42b9d-0c35-4fba-8374-23b58223dce3" path="/var/lib/kubelet/pods/9ae42b9d-0c35-4fba-8374-23b58223dce3/volumes" Jan 23 06:40:27 crc kubenswrapper[4784]: I0123 06:40:27.639087 4784 generic.go:334] "Generic (PLEG): container finished" podID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerID="1a9ef858ff7f17c95f4c419002b58ab8c828215592da098030eb43780983b0ac" exitCode=0 Jan 23 06:40:27 crc kubenswrapper[4784]: I0123 06:40:27.639197 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerDied","Data":"1a9ef858ff7f17c95f4c419002b58ab8c828215592da098030eb43780983b0ac"} Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.434072 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:28 crc kubenswrapper[4784]: E0123 06:40:28.434339 4784 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 06:40:28 crc kubenswrapper[4784]: E0123 06:40:28.434379 4784 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 06:40:28 crc kubenswrapper[4784]: E0123 06:40:28.434466 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift podName:abb5c886-7378-4bdd-b56a-cc803db75cbd nodeName:}" failed. No retries permitted until 2026-01-23 06:40:32.434441407 +0000 UTC m=+1235.666949381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift") pod "swift-storage-0" (UID: "abb5c886-7378-4bdd-b56a-cc803db75cbd") : configmap "swift-ring-files" not found Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.617132 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dt95t"] Jan 23 06:40:28 crc kubenswrapper[4784]: E0123 06:40:28.617573 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae42b9d-0c35-4fba-8374-23b58223dce3" containerName="init" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.617592 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae42b9d-0c35-4fba-8374-23b58223dce3" containerName="init" Jan 23 06:40:28 crc kubenswrapper[4784]: E0123 06:40:28.617651 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae42b9d-0c35-4fba-8374-23b58223dce3" containerName="dnsmasq-dns" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 
06:40:28.617658 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae42b9d-0c35-4fba-8374-23b58223dce3" containerName="dnsmasq-dns" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.617849 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae42b9d-0c35-4fba-8374-23b58223dce3" containerName="dnsmasq-dns" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.618560 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.621870 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.622086 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.622129 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.667864 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-dt95t"] Jan 23 06:40:28 crc kubenswrapper[4784]: E0123 06:40:28.668996 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-lrr7k ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-lrr7k ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-dt95t" podUID="a46ce459-66bb-449d-8797-8351d31363a4" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.679611 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v8cqj"] Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.681094 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.689820 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v8cqj"] Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.697151 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-dt95t"] Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.739979 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/008ddd6f-ae82-41ee-a0d7-ad63e2880889-etc-swift\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740026 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-dispersionconf\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740054 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-swiftconf\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740263 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-swiftconf\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc 
kubenswrapper[4784]: I0123 06:40:28.740383 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-combined-ca-bundle\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740421 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-ring-data-devices\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740563 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a46ce459-66bb-449d-8797-8351d31363a4-etc-swift\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740602 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-scripts\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740647 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrr7k\" (UniqueName: \"kubernetes.io/projected/a46ce459-66bb-449d-8797-8351d31363a4-kube-api-access-lrr7k\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " 
pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740719 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4ztd\" (UniqueName: \"kubernetes.io/projected/008ddd6f-ae82-41ee-a0d7-ad63e2880889-kube-api-access-r4ztd\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740791 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-scripts\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740846 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-ring-data-devices\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740917 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-combined-ca-bundle\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.740995 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-dispersionconf\") pod \"swift-ring-rebalance-dt95t\" (UID: 
\"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.842869 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4ztd\" (UniqueName: \"kubernetes.io/projected/008ddd6f-ae82-41ee-a0d7-ad63e2880889-kube-api-access-r4ztd\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.842930 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-scripts\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.842966 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-ring-data-devices\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843003 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-combined-ca-bundle\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843056 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-dispersionconf\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " 
pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843078 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-dispersionconf\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843097 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/008ddd6f-ae82-41ee-a0d7-ad63e2880889-etc-swift\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843117 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-swiftconf\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843160 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-swiftconf\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843191 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-combined-ca-bundle\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843210 
4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-ring-data-devices\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843246 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a46ce459-66bb-449d-8797-8351d31363a4-etc-swift\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843267 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-scripts\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843291 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrr7k\" (UniqueName: \"kubernetes.io/projected/a46ce459-66bb-449d-8797-8351d31363a4-kube-api-access-lrr7k\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.843904 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/008ddd6f-ae82-41ee-a0d7-ad63e2880889-etc-swift\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.844215 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-scripts\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.844296 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-ring-data-devices\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.844393 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-ring-data-devices\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.846216 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-scripts\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.844509 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a46ce459-66bb-449d-8797-8351d31363a4-etc-swift\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.851519 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-swiftconf\") pod \"swift-ring-rebalance-dt95t\" (UID: 
\"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.851512 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-dispersionconf\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.851533 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-dispersionconf\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.851531 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-swiftconf\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.851703 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-combined-ca-bundle\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.853247 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-combined-ca-bundle\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc 
kubenswrapper[4784]: I0123 06:40:28.862393 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4ztd\" (UniqueName: \"kubernetes.io/projected/008ddd6f-ae82-41ee-a0d7-ad63e2880889-kube-api-access-r4ztd\") pod \"swift-ring-rebalance-v8cqj\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:28 crc kubenswrapper[4784]: I0123 06:40:28.866203 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrr7k\" (UniqueName: \"kubernetes.io/projected/a46ce459-66bb-449d-8797-8351d31363a4-kube-api-access-lrr7k\") pod \"swift-ring-rebalance-dt95t\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:28.999847 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.529086 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v8cqj"] Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.657674 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v8cqj" event={"ID":"008ddd6f-ae82-41ee-a0d7-ad63e2880889","Type":"ContainerStarted","Data":"b8e83cdb7d5317e0ed28303f955d86e9f090b40de76a81b472649a3e476f5d01"} Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.657740 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.674108 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.764261 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-swiftconf\") pod \"a46ce459-66bb-449d-8797-8351d31363a4\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.764424 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a46ce459-66bb-449d-8797-8351d31363a4-etc-swift\") pod \"a46ce459-66bb-449d-8797-8351d31363a4\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.764540 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-dispersionconf\") pod \"a46ce459-66bb-449d-8797-8351d31363a4\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.764638 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrr7k\" (UniqueName: \"kubernetes.io/projected/a46ce459-66bb-449d-8797-8351d31363a4-kube-api-access-lrr7k\") pod \"a46ce459-66bb-449d-8797-8351d31363a4\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.764680 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-scripts\") pod \"a46ce459-66bb-449d-8797-8351d31363a4\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.764821 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-combined-ca-bundle\") pod \"a46ce459-66bb-449d-8797-8351d31363a4\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.764862 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-ring-data-devices\") pod \"a46ce459-66bb-449d-8797-8351d31363a4\" (UID: \"a46ce459-66bb-449d-8797-8351d31363a4\") " Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.765093 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46ce459-66bb-449d-8797-8351d31363a4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a46ce459-66bb-449d-8797-8351d31363a4" (UID: "a46ce459-66bb-449d-8797-8351d31363a4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.765811 4784 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a46ce459-66bb-449d-8797-8351d31363a4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.766593 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a46ce459-66bb-449d-8797-8351d31363a4" (UID: "a46ce459-66bb-449d-8797-8351d31363a4"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.767079 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-scripts" (OuterVolumeSpecName: "scripts") pod "a46ce459-66bb-449d-8797-8351d31363a4" (UID: "a46ce459-66bb-449d-8797-8351d31363a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.774137 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a46ce459-66bb-449d-8797-8351d31363a4" (UID: "a46ce459-66bb-449d-8797-8351d31363a4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.775655 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a46ce459-66bb-449d-8797-8351d31363a4" (UID: "a46ce459-66bb-449d-8797-8351d31363a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.776090 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46ce459-66bb-449d-8797-8351d31363a4-kube-api-access-lrr7k" (OuterVolumeSpecName: "kube-api-access-lrr7k") pod "a46ce459-66bb-449d-8797-8351d31363a4" (UID: "a46ce459-66bb-449d-8797-8351d31363a4"). InnerVolumeSpecName "kube-api-access-lrr7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.776826 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a46ce459-66bb-449d-8797-8351d31363a4" (UID: "a46ce459-66bb-449d-8797-8351d31363a4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.856299 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.856823 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.867072 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrr7k\" (UniqueName: \"kubernetes.io/projected/a46ce459-66bb-449d-8797-8351d31363a4-kube-api-access-lrr7k\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.867104 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.867115 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.867127 4784 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a46ce459-66bb-449d-8797-8351d31363a4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.867136 4784 
reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.867146 4784 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a46ce459-66bb-449d-8797-8351d31363a4-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:29 crc kubenswrapper[4784]: I0123 06:40:29.942962 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 06:40:30 crc kubenswrapper[4784]: I0123 06:40:30.670687 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa","Type":"ContainerStarted","Data":"19aee540d361654a29a3fe2e73d6a71083ea09edff738b6a6503e1690dea7972"} Jan 23 06:40:30 crc kubenswrapper[4784]: I0123 06:40:30.671550 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 06:40:30 crc kubenswrapper[4784]: I0123 06:40:30.671600 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dt95t" Jan 23 06:40:30 crc kubenswrapper[4784]: I0123 06:40:30.696247 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.8940924949999998 podStartE2EDuration="1m7.696221652s" podCreationTimestamp="2026-01-23 06:39:23 +0000 UTC" firstStartedPulling="2026-01-23 06:39:24.8402301 +0000 UTC m=+1168.072738074" lastFinishedPulling="2026-01-23 06:40:29.642359257 +0000 UTC m=+1232.874867231" observedRunningTime="2026-01-23 06:40:30.689906838 +0000 UTC m=+1233.922414822" watchObservedRunningTime="2026-01-23 06:40:30.696221652 +0000 UTC m=+1233.928729626" Jan 23 06:40:30 crc kubenswrapper[4784]: I0123 06:40:30.741343 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-dt95t"] Jan 23 06:40:30 crc kubenswrapper[4784]: I0123 06:40:30.749019 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-dt95t"] Jan 23 06:40:30 crc kubenswrapper[4784]: I0123 06:40:30.773569 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.269936 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46ce459-66bb-449d-8797-8351d31363a4" path="/var/lib/kubelet/pods/a46ce459-66bb-449d-8797-8351d31363a4/volumes" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.303296 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.304045 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.346437 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-zbpdg"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 
06:40:31.348351 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.362018 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-44aa-account-create-update-tc767"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.364435 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.368899 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.395802 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zbpdg"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.404636 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56f31b47-1781-4d5a-b7ee-13ec522694d8-operator-scripts\") pod \"keystone-db-create-zbpdg\" (UID: \"56f31b47-1781-4d5a-b7ee-13ec522694d8\") " pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.404778 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvj7z\" (UniqueName: \"kubernetes.io/projected/456b7f3f-ca26-4bf9-944f-fb93921474fd-kube-api-access-cvj7z\") pod \"keystone-44aa-account-create-update-tc767\" (UID: \"456b7f3f-ca26-4bf9-944f-fb93921474fd\") " pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.404847 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456b7f3f-ca26-4bf9-944f-fb93921474fd-operator-scripts\") pod 
\"keystone-44aa-account-create-update-tc767\" (UID: \"456b7f3f-ca26-4bf9-944f-fb93921474fd\") " pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.404970 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kws5r\" (UniqueName: \"kubernetes.io/projected/56f31b47-1781-4d5a-b7ee-13ec522694d8-kube-api-access-kws5r\") pod \"keystone-db-create-zbpdg\" (UID: \"56f31b47-1781-4d5a-b7ee-13ec522694d8\") " pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.406208 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-44aa-account-create-update-tc767"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.463844 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.508230 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56f31b47-1781-4d5a-b7ee-13ec522694d8-operator-scripts\") pod \"keystone-db-create-zbpdg\" (UID: \"56f31b47-1781-4d5a-b7ee-13ec522694d8\") " pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.508423 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvj7z\" (UniqueName: \"kubernetes.io/projected/456b7f3f-ca26-4bf9-944f-fb93921474fd-kube-api-access-cvj7z\") pod \"keystone-44aa-account-create-update-tc767\" (UID: \"456b7f3f-ca26-4bf9-944f-fb93921474fd\") " pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.508521 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456b7f3f-ca26-4bf9-944f-fb93921474fd-operator-scripts\") 
pod \"keystone-44aa-account-create-update-tc767\" (UID: \"456b7f3f-ca26-4bf9-944f-fb93921474fd\") " pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.508625 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kws5r\" (UniqueName: \"kubernetes.io/projected/56f31b47-1781-4d5a-b7ee-13ec522694d8-kube-api-access-kws5r\") pod \"keystone-db-create-zbpdg\" (UID: \"56f31b47-1781-4d5a-b7ee-13ec522694d8\") " pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.510520 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456b7f3f-ca26-4bf9-944f-fb93921474fd-operator-scripts\") pod \"keystone-44aa-account-create-update-tc767\" (UID: \"456b7f3f-ca26-4bf9-944f-fb93921474fd\") " pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.510983 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56f31b47-1781-4d5a-b7ee-13ec522694d8-operator-scripts\") pod \"keystone-db-create-zbpdg\" (UID: \"56f31b47-1781-4d5a-b7ee-13ec522694d8\") " pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.539407 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-c6jcv"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.540493 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvj7z\" (UniqueName: \"kubernetes.io/projected/456b7f3f-ca26-4bf9-944f-fb93921474fd-kube-api-access-cvj7z\") pod \"keystone-44aa-account-create-update-tc767\" (UID: \"456b7f3f-ca26-4bf9-944f-fb93921474fd\") " pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.540947 4784 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.545683 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kws5r\" (UniqueName: \"kubernetes.io/projected/56f31b47-1781-4d5a-b7ee-13ec522694d8-kube-api-access-kws5r\") pod \"keystone-db-create-zbpdg\" (UID: \"56f31b47-1781-4d5a-b7ee-13ec522694d8\") " pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.577842 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c6jcv"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.616992 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7mmg\" (UniqueName: \"kubernetes.io/projected/f8f92e52-4089-4f9a-90bc-a606d37b058d-kube-api-access-r7mmg\") pod \"placement-db-create-c6jcv\" (UID: \"f8f92e52-4089-4f9a-90bc-a606d37b058d\") " pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.617430 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f92e52-4089-4f9a-90bc-a606d37b058d-operator-scripts\") pod \"placement-db-create-c6jcv\" (UID: \"f8f92e52-4089-4f9a-90bc-a606d37b058d\") " pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.651336 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e86c-account-create-update-6rl54"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.653107 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e86c-account-create-update-6rl54" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.655691 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.662645 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e86c-account-create-update-6rl54"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.678884 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.691822 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.719351 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7mmg\" (UniqueName: \"kubernetes.io/projected/f8f92e52-4089-4f9a-90bc-a606d37b058d-kube-api-access-r7mmg\") pod \"placement-db-create-c6jcv\" (UID: \"f8f92e52-4089-4f9a-90bc-a606d37b058d\") " pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.719493 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqcpj\" (UniqueName: \"kubernetes.io/projected/398711da-15cd-410f-8a7f-8ba41455e438-kube-api-access-cqcpj\") pod \"placement-e86c-account-create-update-6rl54\" (UID: \"398711da-15cd-410f-8a7f-8ba41455e438\") " pod="openstack/placement-e86c-account-create-update-6rl54" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.720177 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398711da-15cd-410f-8a7f-8ba41455e438-operator-scripts\") pod \"placement-e86c-account-create-update-6rl54\" (UID: 
\"398711da-15cd-410f-8a7f-8ba41455e438\") " pod="openstack/placement-e86c-account-create-update-6rl54" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.720584 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f92e52-4089-4f9a-90bc-a606d37b058d-operator-scripts\") pod \"placement-db-create-c6jcv\" (UID: \"f8f92e52-4089-4f9a-90bc-a606d37b058d\") " pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.721293 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f92e52-4089-4f9a-90bc-a606d37b058d-operator-scripts\") pod \"placement-db-create-c6jcv\" (UID: \"f8f92e52-4089-4f9a-90bc-a606d37b058d\") " pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.742055 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7mmg\" (UniqueName: \"kubernetes.io/projected/f8f92e52-4089-4f9a-90bc-a606d37b058d-kube-api-access-r7mmg\") pod \"placement-db-create-c6jcv\" (UID: \"f8f92e52-4089-4f9a-90bc-a606d37b058d\") " pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.787209 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8fqbr"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.791350 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.796390 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8fqbr"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.823320 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqcpj\" (UniqueName: \"kubernetes.io/projected/398711da-15cd-410f-8a7f-8ba41455e438-kube-api-access-cqcpj\") pod \"placement-e86c-account-create-update-6rl54\" (UID: \"398711da-15cd-410f-8a7f-8ba41455e438\") " pod="openstack/placement-e86c-account-create-update-6rl54" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.823388 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398711da-15cd-410f-8a7f-8ba41455e438-operator-scripts\") pod \"placement-e86c-account-create-update-6rl54\" (UID: \"398711da-15cd-410f-8a7f-8ba41455e438\") " pod="openstack/placement-e86c-account-create-update-6rl54" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.826228 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398711da-15cd-410f-8a7f-8ba41455e438-operator-scripts\") pod \"placement-e86c-account-create-update-6rl54\" (UID: \"398711da-15cd-410f-8a7f-8ba41455e438\") " pod="openstack/placement-e86c-account-create-update-6rl54" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.829677 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.847228 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqcpj\" (UniqueName: \"kubernetes.io/projected/398711da-15cd-410f-8a7f-8ba41455e438-kube-api-access-cqcpj\") pod \"placement-e86c-account-create-update-6rl54\" (UID: 
\"398711da-15cd-410f-8a7f-8ba41455e438\") " pod="openstack/placement-e86c-account-create-update-6rl54" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.932412 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-operator-scripts\") pod \"glance-db-create-8fqbr\" (UID: \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\") " pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.932680 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpmm\" (UniqueName: \"kubernetes.io/projected/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-kube-api-access-vvpmm\") pod \"glance-db-create-8fqbr\" (UID: \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\") " pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.967021 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-06bd-account-create-update-pn6qb"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.968994 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.982141 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.982395 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.987477 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06bd-account-create-update-pn6qb"] Jan 23 06:40:31 crc kubenswrapper[4784]: I0123 06:40:31.992805 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e86c-account-create-update-6rl54" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.035057 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvpmm\" (UniqueName: \"kubernetes.io/projected/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-kube-api-access-vvpmm\") pod \"glance-db-create-8fqbr\" (UID: \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\") " pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.035279 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-operator-scripts\") pod \"glance-db-create-8fqbr\" (UID: \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\") " pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.036352 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-operator-scripts\") pod \"glance-db-create-8fqbr\" (UID: \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\") " pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.083562 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvpmm\" (UniqueName: \"kubernetes.io/projected/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-kube-api-access-vvpmm\") pod \"glance-db-create-8fqbr\" (UID: \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\") " pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.118847 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.153617 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxcz\" (UniqueName: \"kubernetes.io/projected/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-kube-api-access-qcxcz\") pod \"glance-06bd-account-create-update-pn6qb\" (UID: \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\") " pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.153733 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-operator-scripts\") pod \"glance-06bd-account-create-update-pn6qb\" (UID: \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\") " pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.255989 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxcz\" (UniqueName: \"kubernetes.io/projected/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-kube-api-access-qcxcz\") pod \"glance-06bd-account-create-update-pn6qb\" (UID: \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\") " pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.256054 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-operator-scripts\") pod \"glance-06bd-account-create-update-pn6qb\" (UID: \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\") " pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.256964 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-operator-scripts\") pod \"glance-06bd-account-create-update-pn6qb\" (UID: \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\") " pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.307191 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxcz\" (UniqueName: \"kubernetes.io/projected/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-kube-api-access-qcxcz\") pod \"glance-06bd-account-create-update-pn6qb\" (UID: \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\") " pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.461129 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:32 crc kubenswrapper[4784]: E0123 06:40:32.461337 4784 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 06:40:32 crc kubenswrapper[4784]: E0123 06:40:32.461361 4784 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 06:40:32 crc kubenswrapper[4784]: E0123 06:40:32.461465 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift podName:abb5c886-7378-4bdd-b56a-cc803db75cbd nodeName:}" failed. No retries permitted until 2026-01-23 06:40:40.461443634 +0000 UTC m=+1243.693951598 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift") pod "swift-storage-0" (UID: "abb5c886-7378-4bdd-b56a-cc803db75cbd") : configmap "swift-ring-files" not found Jan 23 06:40:32 crc kubenswrapper[4784]: I0123 06:40:32.602658 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.611318 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-dqfph"] Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.613378 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.627150 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-dqfph"] Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.760657 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-d401-account-create-update-vp8qw"] Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.762123 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-d401-account-create-update-vp8qw" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.769047 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.782795 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-d401-account-create-update-vp8qw"] Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.790616 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b3705-b4ae-41bc-961c-b249f979ce40-operator-scripts\") pod \"watcher-db-create-dqfph\" (UID: \"ab2b3705-b4ae-41bc-961c-b249f979ce40\") " pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.791072 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72knl\" (UniqueName: \"kubernetes.io/projected/ab2b3705-b4ae-41bc-961c-b249f979ce40-kube-api-access-72knl\") pod \"watcher-db-create-dqfph\" (UID: \"ab2b3705-b4ae-41bc-961c-b249f979ce40\") " pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.865653 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.893960 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78p2z\" (UniqueName: \"kubernetes.io/projected/f878f255-96b1-4ac5-89ab-6890e1ada898-kube-api-access-78p2z\") pod \"watcher-d401-account-create-update-vp8qw\" (UID: \"f878f255-96b1-4ac5-89ab-6890e1ada898\") " pod="openstack/watcher-d401-account-create-update-vp8qw" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.894426 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f878f255-96b1-4ac5-89ab-6890e1ada898-operator-scripts\") pod \"watcher-d401-account-create-update-vp8qw\" (UID: \"f878f255-96b1-4ac5-89ab-6890e1ada898\") " pod="openstack/watcher-d401-account-create-update-vp8qw" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.894547 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72knl\" (UniqueName: \"kubernetes.io/projected/ab2b3705-b4ae-41bc-961c-b249f979ce40-kube-api-access-72knl\") pod \"watcher-db-create-dqfph\" (UID: \"ab2b3705-b4ae-41bc-961c-b249f979ce40\") " pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.894615 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b3705-b4ae-41bc-961c-b249f979ce40-operator-scripts\") pod \"watcher-db-create-dqfph\" (UID: \"ab2b3705-b4ae-41bc-961c-b249f979ce40\") " pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.895882 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b3705-b4ae-41bc-961c-b249f979ce40-operator-scripts\") pod \"watcher-db-create-dqfph\" (UID: \"ab2b3705-b4ae-41bc-961c-b249f979ce40\") " pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.928060 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72knl\" (UniqueName: \"kubernetes.io/projected/ab2b3705-b4ae-41bc-961c-b249f979ce40-kube-api-access-72knl\") pod \"watcher-db-create-dqfph\" (UID: \"ab2b3705-b4ae-41bc-961c-b249f979ce40\") " pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.932169 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7fd796d7df-rgshg"]
Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.932487 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerName="dnsmasq-dns" containerID="cri-o://59fc82851f802c4d5e39c4e52510c7c40e5b86f21820d81d8225e0edfd6c2d8c" gracePeriod=10
Jan 23 06:40:33 crc kubenswrapper[4784]: I0123 06:40:33.943901 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-dqfph"
Jan 23 06:40:34 crc kubenswrapper[4784]: I0123 06:40:34.002693 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78p2z\" (UniqueName: \"kubernetes.io/projected/f878f255-96b1-4ac5-89ab-6890e1ada898-kube-api-access-78p2z\") pod \"watcher-d401-account-create-update-vp8qw\" (UID: \"f878f255-96b1-4ac5-89ab-6890e1ada898\") " pod="openstack/watcher-d401-account-create-update-vp8qw"
Jan 23 06:40:34 crc kubenswrapper[4784]: I0123 06:40:34.003474 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f878f255-96b1-4ac5-89ab-6890e1ada898-operator-scripts\") pod \"watcher-d401-account-create-update-vp8qw\" (UID: \"f878f255-96b1-4ac5-89ab-6890e1ada898\") " pod="openstack/watcher-d401-account-create-update-vp8qw"
Jan 23 06:40:34 crc kubenswrapper[4784]: I0123 06:40:34.007325 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f878f255-96b1-4ac5-89ab-6890e1ada898-operator-scripts\") pod \"watcher-d401-account-create-update-vp8qw\" (UID: \"f878f255-96b1-4ac5-89ab-6890e1ada898\") " pod="openstack/watcher-d401-account-create-update-vp8qw"
Jan 23 06:40:34 crc kubenswrapper[4784]: I0123 06:40:34.036897 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78p2z\" (UniqueName: \"kubernetes.io/projected/f878f255-96b1-4ac5-89ab-6890e1ada898-kube-api-access-78p2z\") pod \"watcher-d401-account-create-update-vp8qw\" (UID: \"f878f255-96b1-4ac5-89ab-6890e1ada898\") " pod="openstack/watcher-d401-account-create-update-vp8qw"
Jan 23 06:40:34 crc kubenswrapper[4784]: I0123 06:40:34.090571 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-d401-account-create-update-vp8qw"
Jan 23 06:40:34 crc kubenswrapper[4784]: I0123 06:40:34.726629 4784 generic.go:334] "Generic (PLEG): container finished" podID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerID="59fc82851f802c4d5e39c4e52510c7c40e5b86f21820d81d8225e0edfd6c2d8c" exitCode=0
Jan 23 06:40:34 crc kubenswrapper[4784]: I0123 06:40:34.726684 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" event={"ID":"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97","Type":"ContainerDied","Data":"59fc82851f802c4d5e39c4e52510c7c40e5b86f21820d81d8225e0edfd6c2d8c"}
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.203535 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xkjmp"]
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.206811 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xkjmp"
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.209637 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.211791 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xkjmp"]
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.306742 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwrc\" (UniqueName: \"kubernetes.io/projected/415ab3e9-f3df-47d9-9382-28313dc767d4-kube-api-access-bfwrc\") pod \"root-account-create-update-xkjmp\" (UID: \"415ab3e9-f3df-47d9-9382-28313dc767d4\") " pod="openstack/root-account-create-update-xkjmp"
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.306886 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415ab3e9-f3df-47d9-9382-28313dc767d4-operator-scripts\") pod \"root-account-create-update-xkjmp\" (UID: \"415ab3e9-f3df-47d9-9382-28313dc767d4\") " pod="openstack/root-account-create-update-xkjmp"
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.409252 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfwrc\" (UniqueName: \"kubernetes.io/projected/415ab3e9-f3df-47d9-9382-28313dc767d4-kube-api-access-bfwrc\") pod \"root-account-create-update-xkjmp\" (UID: \"415ab3e9-f3df-47d9-9382-28313dc767d4\") " pod="openstack/root-account-create-update-xkjmp"
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.409353 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415ab3e9-f3df-47d9-9382-28313dc767d4-operator-scripts\") pod \"root-account-create-update-xkjmp\" (UID: \"415ab3e9-f3df-47d9-9382-28313dc767d4\") " pod="openstack/root-account-create-update-xkjmp"
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.410743 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415ab3e9-f3df-47d9-9382-28313dc767d4-operator-scripts\") pod \"root-account-create-update-xkjmp\" (UID: \"415ab3e9-f3df-47d9-9382-28313dc767d4\") " pod="openstack/root-account-create-update-xkjmp"
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.437079 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfwrc\" (UniqueName: \"kubernetes.io/projected/415ab3e9-f3df-47d9-9382-28313dc767d4-kube-api-access-bfwrc\") pod \"root-account-create-update-xkjmp\" (UID: \"415ab3e9-f3df-47d9-9382-28313dc767d4\") " pod="openstack/root-account-create-update-xkjmp"
Jan 23 06:40:38 crc kubenswrapper[4784]: I0123 06:40:38.532214 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xkjmp"
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.288494 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout"
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.556170 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0"
Jan 23 06:40:40 crc kubenswrapper[4784]: E0123 06:40:40.556830 4784 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 06:40:40 crc kubenswrapper[4784]: E0123 06:40:40.556844 4784 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 06:40:40 crc kubenswrapper[4784]: E0123 06:40:40.556892 4784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift podName:abb5c886-7378-4bdd-b56a-cc803db75cbd nodeName:}" failed. No retries permitted until 2026-01-23 06:40:56.556877952 +0000 UTC m=+1259.789385926 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift") pod "swift-storage-0" (UID: "abb5c886-7378-4bdd-b56a-cc803db75cbd") : configmap "swift-ring-files" not found
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.652323 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg"
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.760344 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-config\") pod \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") "
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.760467 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-dns-svc\") pod \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") "
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.760533 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lrtk\" (UniqueName: \"kubernetes.io/projected/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-kube-api-access-8lrtk\") pod \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") "
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.760673 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-ovsdbserver-nb\") pod \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\" (UID: \"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97\") "
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.777027 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-kube-api-access-8lrtk" (OuterVolumeSpecName: "kube-api-access-8lrtk") pod "3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" (UID: "3f489a74-d7ce-4b5f-90f9-a1075b8e6b97"). InnerVolumeSpecName "kube-api-access-8lrtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.799859 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" event={"ID":"3f489a74-d7ce-4b5f-90f9-a1075b8e6b97","Type":"ContainerDied","Data":"6538cfb7e1af76f403522ed2a2c532f39235b7a440d03582843b843f9943dab9"}
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.799982 4784 scope.go:117] "RemoveContainer" containerID="59fc82851f802c4d5e39c4e52510c7c40e5b86f21820d81d8225e0edfd6c2d8c"
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.799978 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg"
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.842445 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-config" (OuterVolumeSpecName: "config") pod "3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" (UID: "3f489a74-d7ce-4b5f-90f9-a1075b8e6b97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.863998 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lrtk\" (UniqueName: \"kubernetes.io/projected/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-kube-api-access-8lrtk\") on node \"crc\" DevicePath \"\""
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.864031 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-config\") on node \"crc\" DevicePath \"\""
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.864516 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" (UID: "3f489a74-d7ce-4b5f-90f9-a1075b8e6b97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.867337 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" (UID: "3f489a74-d7ce-4b5f-90f9-a1075b8e6b97"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.876004 4784 scope.go:117] "RemoveContainer" containerID="e7d08b9432787000e98db0e797dbc5bcc97fdeb8c3306b28df4cb955001f3582"
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.965656 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 06:40:40 crc kubenswrapper[4784]: I0123 06:40:40.965688 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.012938 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e86c-account-create-update-6rl54"]
Jan 23 06:40:41 crc kubenswrapper[4784]: W0123 06:40:41.035422 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod398711da_15cd_410f_8a7f_8ba41455e438.slice/crio-340e7d8f3ebbdb6c0bdcfeafcf39a21cee820b8cb587afea264cad198b97c2a5 WatchSource:0}: Error finding container 340e7d8f3ebbdb6c0bdcfeafcf39a21cee820b8cb587afea264cad198b97c2a5: Status 404 returned error can't find the container with id 340e7d8f3ebbdb6c0bdcfeafcf39a21cee820b8cb587afea264cad198b97c2a5
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.322121 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rgshg"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.347317 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rgshg"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.366769 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8fqbr"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.407421 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-44aa-account-create-update-tc767"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.484126 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-zbpdg"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.692963 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-dqfph"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.728840 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06bd-account-create-update-pn6qb"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.743883 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-d401-account-create-update-vp8qw"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.751225 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xkjmp"]
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.762416 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c6jcv"]
Jan 23 06:40:41 crc kubenswrapper[4784]: W0123 06:40:41.787852 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8f92e52_4089_4f9a_90bc_a606d37b058d.slice/crio-3e85236e84a671cc7e3a018c1c0d770b3fcc38f33272e74381f2938a022f16f6 WatchSource:0}: Error finding container 3e85236e84a671cc7e3a018c1c0d770b3fcc38f33272e74381f2938a022f16f6: Status 404 returned error can't find the container with id 3e85236e84a671cc7e3a018c1c0d770b3fcc38f33272e74381f2938a022f16f6
Jan 23 06:40:41 crc kubenswrapper[4784]: W0123 06:40:41.788530 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf878f255_96b1_4ac5_89ab_6890e1ada898.slice/crio-3bc9d57351663414fe790c2e0fe7f4642c46bee559cf00a8c238299bd207fdc3 WatchSource:0}: Error finding container 3bc9d57351663414fe790c2e0fe7f4642c46bee559cf00a8c238299bd207fdc3: Status 404 returned error can't find the container with id 3bc9d57351663414fe790c2e0fe7f4642c46bee559cf00a8c238299bd207fdc3
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.819335 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06bd-account-create-update-pn6qb" event={"ID":"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b","Type":"ContainerStarted","Data":"756330fa92870345d22f44ee244d79a2a6b3c62a0fde635270aef237d80b8ecc"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.823283 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-dqfph" event={"ID":"ab2b3705-b4ae-41bc-961c-b249f979ce40","Type":"ContainerStarted","Data":"74b71994d56ab960cdb559c4a535da83ad12d6fc89a1b2c4ac7dba25840c55aa"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.830835 4784 generic.go:334] "Generic (PLEG): container finished" podID="398711da-15cd-410f-8a7f-8ba41455e438" containerID="4265f0db769a3a352a0e60c5895daf630075b8977facb0b8ae6f4b62ea89b803" exitCode=0
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.830907 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e86c-account-create-update-6rl54" event={"ID":"398711da-15cd-410f-8a7f-8ba41455e438","Type":"ContainerDied","Data":"4265f0db769a3a352a0e60c5895daf630075b8977facb0b8ae6f4b62ea89b803"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.830963 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e86c-account-create-update-6rl54" event={"ID":"398711da-15cd-410f-8a7f-8ba41455e438","Type":"ContainerStarted","Data":"340e7d8f3ebbdb6c0bdcfeafcf39a21cee820b8cb587afea264cad198b97c2a5"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.839935 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerStarted","Data":"9c4821b368a785659445a70da81187ff14092dcaa790f9d0352a62fa1d701929"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.842847 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8fqbr" event={"ID":"0a02b591-5a08-4a50-a248-9d6fb8c9e13e","Type":"ContainerStarted","Data":"bce5201a338d649c6897cd05de92f8fa4a29c986753786f3c945739266e30fc4"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.842926 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8fqbr" event={"ID":"0a02b591-5a08-4a50-a248-9d6fb8c9e13e","Type":"ContainerStarted","Data":"746242d3db7d7e886b419e69e21c5390ba9b67f2ec6b42bfc18ceb716822b538"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.852680 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2317a2c2-318f-46c1-98d0-61c93c840b91","Type":"ContainerStarted","Data":"d4cb5743aa982539ba37cc4045a98044b4ba70204a5004e49e0c01b4e1b820ad"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.854590 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c6jcv" event={"ID":"f8f92e52-4089-4f9a-90bc-a606d37b058d","Type":"ContainerStarted","Data":"3e85236e84a671cc7e3a018c1c0d770b3fcc38f33272e74381f2938a022f16f6"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.855705 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xkjmp" event={"ID":"415ab3e9-f3df-47d9-9382-28313dc767d4","Type":"ContainerStarted","Data":"06eb0117fc6440b11d215a17a3c132e744d3e2fb5008d7eb2e34a90b16ea7768"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.871549 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-8fqbr" podStartSLOduration=10.871514908 podStartE2EDuration="10.871514908s" podCreationTimestamp="2026-01-23 06:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:41.868811621 +0000 UTC m=+1245.101319615" watchObservedRunningTime="2026-01-23 06:40:41.871514908 +0000 UTC m=+1245.104022882"
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.906972 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zbpdg" event={"ID":"56f31b47-1781-4d5a-b7ee-13ec522694d8","Type":"ContainerStarted","Data":"0174c223af2d13c5f032c923ab6225eeba946f0503767b538c856787040477cf"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.907034 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zbpdg" event={"ID":"56f31b47-1781-4d5a-b7ee-13ec522694d8","Type":"ContainerStarted","Data":"0623dd160ec5f9a83089fd608ee7de919c7f03b4a9c09867808b4d639b920e71"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.910644 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-d401-account-create-update-vp8qw" event={"ID":"f878f255-96b1-4ac5-89ab-6890e1ada898","Type":"ContainerStarted","Data":"3bc9d57351663414fe790c2e0fe7f4642c46bee559cf00a8c238299bd207fdc3"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.915465 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-44aa-account-create-update-tc767" event={"ID":"456b7f3f-ca26-4bf9-944f-fb93921474fd","Type":"ContainerStarted","Data":"a267b004bcad066ff3307519e0042f383420edf3b98aa4a3aa2404f0beeac50c"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.915540 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-44aa-account-create-update-tc767" event={"ID":"456b7f3f-ca26-4bf9-944f-fb93921474fd","Type":"ContainerStarted","Data":"234a6fe63acc873a1a6dad037e39f4671a3369e4f2180ba0e9e51db5bdd17d8d"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.922033 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v8cqj" event={"ID":"008ddd6f-ae82-41ee-a0d7-ad63e2880889","Type":"ContainerStarted","Data":"c512c0003b2aef8f72bd1a110c893600342444324eae6d12af0c7e716279b291"}
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.928031 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-zbpdg" podStartSLOduration=10.928000736 podStartE2EDuration="10.928000736s" podCreationTimestamp="2026-01-23 06:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:41.927927614 +0000 UTC m=+1245.160435608" watchObservedRunningTime="2026-01-23 06:40:41.928000736 +0000 UTC m=+1245.160508710"
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.933721 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.058939092 podStartE2EDuration="1m16.933704306s" podCreationTimestamp="2026-01-23 06:39:25 +0000 UTC" firstStartedPulling="2026-01-23 06:39:33.821575057 +0000 UTC m=+1177.054083031" lastFinishedPulling="2026-01-23 06:40:40.696340271 +0000 UTC m=+1243.928848245" observedRunningTime="2026-01-23 06:40:41.908053156 +0000 UTC m=+1245.140561130" watchObservedRunningTime="2026-01-23 06:40:41.933704306 +0000 UTC m=+1245.166212270"
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.955969 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-44aa-account-create-update-tc767" podStartSLOduration=10.955927573 podStartE2EDuration="10.955927573s" podCreationTimestamp="2026-01-23 06:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:41.947159907 +0000 UTC m=+1245.179667881" watchObservedRunningTime="2026-01-23 06:40:41.955927573 +0000 UTC m=+1245.188435547"
Jan 23 06:40:41 crc kubenswrapper[4784]: I0123 06:40:41.972352 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-v8cqj" podStartSLOduration=3.039186303 podStartE2EDuration="13.972269395s" podCreationTimestamp="2026-01-23 06:40:28 +0000 UTC" firstStartedPulling="2026-01-23 06:40:29.54241896 +0000 UTC m=+1232.774926934" lastFinishedPulling="2026-01-23 06:40:40.475502052 +0000 UTC m=+1243.708010026" observedRunningTime="2026-01-23 06:40:41.964687988 +0000 UTC m=+1245.197195992" watchObservedRunningTime="2026-01-23 06:40:41.972269395 +0000 UTC m=+1245.204777369"
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.826307 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.826900 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.934552 4784 generic.go:334] "Generic (PLEG): container finished" podID="56f31b47-1781-4d5a-b7ee-13ec522694d8" containerID="0174c223af2d13c5f032c923ab6225eeba946f0503767b538c856787040477cf" exitCode=0
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.934623 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zbpdg" event={"ID":"56f31b47-1781-4d5a-b7ee-13ec522694d8","Type":"ContainerDied","Data":"0174c223af2d13c5f032c923ab6225eeba946f0503767b538c856787040477cf"}
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.944033 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-dqfph" event={"ID":"ab2b3705-b4ae-41bc-961c-b249f979ce40","Type":"ContainerStarted","Data":"6f54f5e2e6870280636a0767f6b0ae631cba4e8e5b12c4910b6b39b14cdf5e7b"}
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.946792 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-d401-account-create-update-vp8qw" event={"ID":"f878f255-96b1-4ac5-89ab-6890e1ada898","Type":"ContainerStarted","Data":"b113c8d058e9a8bfa4b5384b482cb01c185bffb8c37c32fa00ba7ef9d139cc0d"}
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.949736 4784 generic.go:334] "Generic (PLEG): container finished" podID="456b7f3f-ca26-4bf9-944f-fb93921474fd" containerID="a267b004bcad066ff3307519e0042f383420edf3b98aa4a3aa2404f0beeac50c" exitCode=0
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.949829 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-44aa-account-create-update-tc767" event={"ID":"456b7f3f-ca26-4bf9-944f-fb93921474fd","Type":"ContainerDied","Data":"a267b004bcad066ff3307519e0042f383420edf3b98aa4a3aa2404f0beeac50c"}
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.954446 4784 generic.go:334] "Generic (PLEG): container finished" podID="0a02b591-5a08-4a50-a248-9d6fb8c9e13e" containerID="bce5201a338d649c6897cd05de92f8fa4a29c986753786f3c945739266e30fc4" exitCode=0
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.954501 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8fqbr" event={"ID":"0a02b591-5a08-4a50-a248-9d6fb8c9e13e","Type":"ContainerDied","Data":"bce5201a338d649c6897cd05de92f8fa4a29c986753786f3c945739266e30fc4"}
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.958286 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06bd-account-create-update-pn6qb" event={"ID":"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b","Type":"ContainerStarted","Data":"575e659b056a0d140f3523036f9c949c31d7be4c8b01ce54b34960a8b27c76f6"}
Jan 23 06:40:42 crc kubenswrapper[4784]: I0123 06:40:42.984717 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-dqfph" podStartSLOduration=9.984690041 podStartE2EDuration="9.984690041s" podCreationTimestamp="2026-01-23 06:40:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:42.977796432 +0000 UTC m=+1246.210304406" watchObservedRunningTime="2026-01-23 06:40:42.984690041 +0000 UTC m=+1246.217198015"
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.036294 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-06bd-account-create-update-pn6qb" podStartSLOduration=12.036261509 podStartE2EDuration="12.036261509s" podCreationTimestamp="2026-01-23 06:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:43.035626433 +0000 UTC m=+1246.268134407" watchObservedRunningTime="2026-01-23 06:40:43.036261509 +0000 UTC m=+1246.268769483"
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.267344 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" path="/var/lib/kubelet/pods/3f489a74-d7ce-4b5f-90f9-a1075b8e6b97/volumes"
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.520732 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e86c-account-create-update-6rl54"
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.649901 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398711da-15cd-410f-8a7f-8ba41455e438-operator-scripts\") pod \"398711da-15cd-410f-8a7f-8ba41455e438\" (UID: \"398711da-15cd-410f-8a7f-8ba41455e438\") "
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.649971 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqcpj\" (UniqueName: \"kubernetes.io/projected/398711da-15cd-410f-8a7f-8ba41455e438-kube-api-access-cqcpj\") pod \"398711da-15cd-410f-8a7f-8ba41455e438\" (UID: \"398711da-15cd-410f-8a7f-8ba41455e438\") "
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.651817 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398711da-15cd-410f-8a7f-8ba41455e438-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "398711da-15cd-410f-8a7f-8ba41455e438" (UID: "398711da-15cd-410f-8a7f-8ba41455e438"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.657683 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398711da-15cd-410f-8a7f-8ba41455e438-kube-api-access-cqcpj" (OuterVolumeSpecName: "kube-api-access-cqcpj") pod "398711da-15cd-410f-8a7f-8ba41455e438" (UID: "398711da-15cd-410f-8a7f-8ba41455e438"). InnerVolumeSpecName "kube-api-access-cqcpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.755825 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398711da-15cd-410f-8a7f-8ba41455e438-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.755877 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqcpj\" (UniqueName: \"kubernetes.io/projected/398711da-15cd-410f-8a7f-8ba41455e438-kube-api-access-cqcpj\") on node \"crc\" DevicePath \"\""
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.935074 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.977144 4784 generic.go:334] "Generic (PLEG): container finished" podID="f8f92e52-4089-4f9a-90bc-a606d37b058d" containerID="80977d5826b9f086d45f13c7d275bbd2cee5caf6c832bd1b8b2f1fe171894961" exitCode=0
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.977238 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c6jcv" event={"ID":"f8f92e52-4089-4f9a-90bc-a606d37b058d","Type":"ContainerDied","Data":"80977d5826b9f086d45f13c7d275bbd2cee5caf6c832bd1b8b2f1fe171894961"}
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.981767 4784 generic.go:334] "Generic (PLEG): container finished" podID="415ab3e9-f3df-47d9-9382-28313dc767d4" containerID="f5ebaf4ee0dd3216164b2a5f19c0af5b91c6f17768645a2ebd502373609b5cb7" exitCode=0
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.981836 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xkjmp" event={"ID":"415ab3e9-f3df-47d9-9382-28313dc767d4","Type":"ContainerDied","Data":"f5ebaf4ee0dd3216164b2a5f19c0af5b91c6f17768645a2ebd502373609b5cb7"}
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.991894 4784 generic.go:334] "Generic (PLEG): container finished" podID="ab2b3705-b4ae-41bc-961c-b249f979ce40" containerID="6f54f5e2e6870280636a0767f6b0ae631cba4e8e5b12c4910b6b39b14cdf5e7b" exitCode=0
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.991982 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-dqfph" event={"ID":"ab2b3705-b4ae-41bc-961c-b249f979ce40","Type":"ContainerDied","Data":"6f54f5e2e6870280636a0767f6b0ae631cba4e8e5b12c4910b6b39b14cdf5e7b"}
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.995879 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e86c-account-create-update-6rl54" event={"ID":"398711da-15cd-410f-8a7f-8ba41455e438","Type":"ContainerDied","Data":"340e7d8f3ebbdb6c0bdcfeafcf39a21cee820b8cb587afea264cad198b97c2a5"}
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.995908 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="340e7d8f3ebbdb6c0bdcfeafcf39a21cee820b8cb587afea264cad198b97c2a5"
Jan 23 06:40:43 crc kubenswrapper[4784]: I0123 06:40:43.995973 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e86c-account-create-update-6rl54"
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.004475 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerStarted","Data":"ee4f104cdea26e016b99070d2bb297037661a571a42f20c04ca04a25c8f53e70"}
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.006274 4784 generic.go:334] "Generic (PLEG): container finished" podID="f878f255-96b1-4ac5-89ab-6890e1ada898" containerID="b113c8d058e9a8bfa4b5384b482cb01c185bffb8c37c32fa00ba7ef9d139cc0d" exitCode=0
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.006372 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-d401-account-create-update-vp8qw" event={"ID":"f878f255-96b1-4ac5-89ab-6890e1ada898","Type":"ContainerDied","Data":"b113c8d058e9a8bfa4b5384b482cb01c185bffb8c37c32fa00ba7ef9d139cc0d"}
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.009612 4784 generic.go:334] "Generic (PLEG): container finished" podID="d3922ff9-5f68-4ef0-8a15-d0b4b566e78b" containerID="575e659b056a0d140f3523036f9c949c31d7be4c8b01ce54b34960a8b27c76f6" exitCode=0
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.009795 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06bd-account-create-update-pn6qb" event={"ID":"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b","Type":"ContainerDied","Data":"575e659b056a0d140f3523036f9c949c31d7be4c8b01ce54b34960a8b27c76f6"}
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.492313 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8fqbr"
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.578487 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvpmm\" (UniqueName: \"kubernetes.io/projected/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-kube-api-access-vvpmm\") pod \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\" (UID: \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\") "
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.578704 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-operator-scripts\") pod \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\" (UID: \"0a02b591-5a08-4a50-a248-9d6fb8c9e13e\") "
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.579407 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a02b591-5a08-4a50-a248-9d6fb8c9e13e" (UID: "0a02b591-5a08-4a50-a248-9d6fb8c9e13e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.580051 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.591830 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-kube-api-access-vvpmm" (OuterVolumeSpecName: "kube-api-access-vvpmm") pod "0a02b591-5a08-4a50-a248-9d6fb8c9e13e" (UID: "0a02b591-5a08-4a50-a248-9d6fb8c9e13e"). InnerVolumeSpecName "kube-api-access-vvpmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.667126 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-zbpdg"
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.672967 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-44aa-account-create-update-tc767"
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.681829 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvpmm\" (UniqueName: \"kubernetes.io/projected/0a02b591-5a08-4a50-a248-9d6fb8c9e13e-kube-api-access-vvpmm\") on node \"crc\" DevicePath \"\""
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.783206 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvj7z\" (UniqueName: \"kubernetes.io/projected/456b7f3f-ca26-4bf9-944f-fb93921474fd-kube-api-access-cvj7z\") pod \"456b7f3f-ca26-4bf9-944f-fb93921474fd\" (UID: \"456b7f3f-ca26-4bf9-944f-fb93921474fd\") "
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.783448 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kws5r\" (UniqueName: \"kubernetes.io/projected/56f31b47-1781-4d5a-b7ee-13ec522694d8-kube-api-access-kws5r\") pod \"56f31b47-1781-4d5a-b7ee-13ec522694d8\" (UID: \"56f31b47-1781-4d5a-b7ee-13ec522694d8\") "
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.783555 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456b7f3f-ca26-4bf9-944f-fb93921474fd-operator-scripts\") pod \"456b7f3f-ca26-4bf9-944f-fb93921474fd\" (UID: \"456b7f3f-ca26-4bf9-944f-fb93921474fd\") "
Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.783607 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\"
(UniqueName: \"kubernetes.io/configmap/56f31b47-1781-4d5a-b7ee-13ec522694d8-operator-scripts\") pod \"56f31b47-1781-4d5a-b7ee-13ec522694d8\" (UID: \"56f31b47-1781-4d5a-b7ee-13ec522694d8\") " Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.784236 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/456b7f3f-ca26-4bf9-944f-fb93921474fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "456b7f3f-ca26-4bf9-944f-fb93921474fd" (UID: "456b7f3f-ca26-4bf9-944f-fb93921474fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.784358 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f31b47-1781-4d5a-b7ee-13ec522694d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56f31b47-1781-4d5a-b7ee-13ec522694d8" (UID: "56f31b47-1781-4d5a-b7ee-13ec522694d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.787272 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456b7f3f-ca26-4bf9-944f-fb93921474fd-kube-api-access-cvj7z" (OuterVolumeSpecName: "kube-api-access-cvj7z") pod "456b7f3f-ca26-4bf9-944f-fb93921474fd" (UID: "456b7f3f-ca26-4bf9-944f-fb93921474fd"). InnerVolumeSpecName "kube-api-access-cvj7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.787837 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f31b47-1781-4d5a-b7ee-13ec522694d8-kube-api-access-kws5r" (OuterVolumeSpecName: "kube-api-access-kws5r") pod "56f31b47-1781-4d5a-b7ee-13ec522694d8" (UID: "56f31b47-1781-4d5a-b7ee-13ec522694d8"). InnerVolumeSpecName "kube-api-access-kws5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.886463 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/456b7f3f-ca26-4bf9-944f-fb93921474fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.886500 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56f31b47-1781-4d5a-b7ee-13ec522694d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.886511 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvj7z\" (UniqueName: \"kubernetes.io/projected/456b7f3f-ca26-4bf9-944f-fb93921474fd-kube-api-access-cvj7z\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:44 crc kubenswrapper[4784]: I0123 06:40:44.886526 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kws5r\" (UniqueName: \"kubernetes.io/projected/56f31b47-1781-4d5a-b7ee-13ec522694d8-kube-api-access-kws5r\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.020113 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-44aa-account-create-update-tc767" event={"ID":"456b7f3f-ca26-4bf9-944f-fb93921474fd","Type":"ContainerDied","Data":"234a6fe63acc873a1a6dad037e39f4671a3369e4f2180ba0e9e51db5bdd17d8d"} Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.020215 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-44aa-account-create-update-tc767" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.020193 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="234a6fe63acc873a1a6dad037e39f4671a3369e4f2180ba0e9e51db5bdd17d8d" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.023636 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8fqbr" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.023643 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8fqbr" event={"ID":"0a02b591-5a08-4a50-a248-9d6fb8c9e13e","Type":"ContainerDied","Data":"746242d3db7d7e886b419e69e21c5390ba9b67f2ec6b42bfc18ceb716822b538"} Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.023785 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746242d3db7d7e886b419e69e21c5390ba9b67f2ec6b42bfc18ceb716822b538" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.025949 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-zbpdg" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.037833 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-zbpdg" event={"ID":"56f31b47-1781-4d5a-b7ee-13ec522694d8","Type":"ContainerDied","Data":"0623dd160ec5f9a83089fd608ee7de919c7f03b4a9c09867808b4d639b920e71"} Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.037930 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0623dd160ec5f9a83089fd608ee7de919c7f03b4a9c09867808b4d639b920e71" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.290086 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-rgshg" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.331304 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.506215 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f92e52-4089-4f9a-90bc-a606d37b058d-operator-scripts\") pod \"f8f92e52-4089-4f9a-90bc-a606d37b058d\" (UID: \"f8f92e52-4089-4f9a-90bc-a606d37b058d\") " Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.506885 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f92e52-4089-4f9a-90bc-a606d37b058d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8f92e52-4089-4f9a-90bc-a606d37b058d" (UID: "f8f92e52-4089-4f9a-90bc-a606d37b058d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.506950 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7mmg\" (UniqueName: \"kubernetes.io/projected/f8f92e52-4089-4f9a-90bc-a606d37b058d-kube-api-access-r7mmg\") pod \"f8f92e52-4089-4f9a-90bc-a606d37b058d\" (UID: \"f8f92e52-4089-4f9a-90bc-a606d37b058d\") " Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.507933 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8f92e52-4089-4f9a-90bc-a606d37b058d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.513659 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f92e52-4089-4f9a-90bc-a606d37b058d-kube-api-access-r7mmg" (OuterVolumeSpecName: "kube-api-access-r7mmg") pod "f8f92e52-4089-4f9a-90bc-a606d37b058d" (UID: "f8f92e52-4089-4f9a-90bc-a606d37b058d"). InnerVolumeSpecName "kube-api-access-r7mmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.610427 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7mmg\" (UniqueName: \"kubernetes.io/projected/f8f92e52-4089-4f9a-90bc-a606d37b058d-kube-api-access-r7mmg\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.770856 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.779244 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-d401-account-create-update-vp8qw" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.795938 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.837451 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xkjmp" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.886782 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.923089 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcxcz\" (UniqueName: \"kubernetes.io/projected/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-kube-api-access-qcxcz\") pod \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\" (UID: \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\") " Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.923185 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b3705-b4ae-41bc-961c-b249f979ce40-operator-scripts\") pod \"ab2b3705-b4ae-41bc-961c-b249f979ce40\" (UID: \"ab2b3705-b4ae-41bc-961c-b249f979ce40\") " Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.923281 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-operator-scripts\") pod \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\" (UID: \"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b\") " Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.923318 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78p2z\" (UniqueName: \"kubernetes.io/projected/f878f255-96b1-4ac5-89ab-6890e1ada898-kube-api-access-78p2z\") pod \"f878f255-96b1-4ac5-89ab-6890e1ada898\" (UID: \"f878f255-96b1-4ac5-89ab-6890e1ada898\") " Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.923367 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-72knl\" (UniqueName: \"kubernetes.io/projected/ab2b3705-b4ae-41bc-961c-b249f979ce40-kube-api-access-72knl\") pod \"ab2b3705-b4ae-41bc-961c-b249f979ce40\" (UID: \"ab2b3705-b4ae-41bc-961c-b249f979ce40\") " Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.923497 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f878f255-96b1-4ac5-89ab-6890e1ada898-operator-scripts\") pod \"f878f255-96b1-4ac5-89ab-6890e1ada898\" (UID: \"f878f255-96b1-4ac5-89ab-6890e1ada898\") " Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.924311 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f878f255-96b1-4ac5-89ab-6890e1ada898-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f878f255-96b1-4ac5-89ab-6890e1ada898" (UID: "f878f255-96b1-4ac5-89ab-6890e1ada898"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.924650 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3922ff9-5f68-4ef0-8a15-d0b4b566e78b" (UID: "d3922ff9-5f68-4ef0-8a15-d0b4b566e78b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.925041 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2b3705-b4ae-41bc-961c-b249f979ce40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab2b3705-b4ae-41bc-961c-b249f979ce40" (UID: "ab2b3705-b4ae-41bc-961c-b249f979ce40"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.928363 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-kube-api-access-qcxcz" (OuterVolumeSpecName: "kube-api-access-qcxcz") pod "d3922ff9-5f68-4ef0-8a15-d0b4b566e78b" (UID: "d3922ff9-5f68-4ef0-8a15-d0b4b566e78b"). InnerVolumeSpecName "kube-api-access-qcxcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.928302 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2b3705-b4ae-41bc-961c-b249f979ce40-kube-api-access-72knl" (OuterVolumeSpecName: "kube-api-access-72knl") pod "ab2b3705-b4ae-41bc-961c-b249f979ce40" (UID: "ab2b3705-b4ae-41bc-961c-b249f979ce40"). InnerVolumeSpecName "kube-api-access-72knl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:45 crc kubenswrapper[4784]: I0123 06:40:45.938727 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f878f255-96b1-4ac5-89ab-6890e1ada898-kube-api-access-78p2z" (OuterVolumeSpecName: "kube-api-access-78p2z") pod "f878f255-96b1-4ac5-89ab-6890e1ada898" (UID: "f878f255-96b1-4ac5-89ab-6890e1ada898"). InnerVolumeSpecName "kube-api-access-78p2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.027886 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfwrc\" (UniqueName: \"kubernetes.io/projected/415ab3e9-f3df-47d9-9382-28313dc767d4-kube-api-access-bfwrc\") pod \"415ab3e9-f3df-47d9-9382-28313dc767d4\" (UID: \"415ab3e9-f3df-47d9-9382-28313dc767d4\") " Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.027949 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415ab3e9-f3df-47d9-9382-28313dc767d4-operator-scripts\") pod \"415ab3e9-f3df-47d9-9382-28313dc767d4\" (UID: \"415ab3e9-f3df-47d9-9382-28313dc767d4\") " Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.028960 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.028983 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78p2z\" (UniqueName: \"kubernetes.io/projected/f878f255-96b1-4ac5-89ab-6890e1ada898-kube-api-access-78p2z\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.028996 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72knl\" (UniqueName: \"kubernetes.io/projected/ab2b3705-b4ae-41bc-961c-b249f979ce40-kube-api-access-72knl\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.029008 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f878f255-96b1-4ac5-89ab-6890e1ada898-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.029018 4784 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-qcxcz\" (UniqueName: \"kubernetes.io/projected/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b-kube-api-access-qcxcz\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.029027 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b3705-b4ae-41bc-961c-b249f979ce40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.031131 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415ab3e9-f3df-47d9-9382-28313dc767d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "415ab3e9-f3df-47d9-9382-28313dc767d4" (UID: "415ab3e9-f3df-47d9-9382-28313dc767d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.034345 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415ab3e9-f3df-47d9-9382-28313dc767d4-kube-api-access-bfwrc" (OuterVolumeSpecName: "kube-api-access-bfwrc") pod "415ab3e9-f3df-47d9-9382-28313dc767d4" (UID: "415ab3e9-f3df-47d9-9382-28313dc767d4"). InnerVolumeSpecName "kube-api-access-bfwrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.039183 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-06bd-account-create-update-pn6qb" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.039369 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06bd-account-create-update-pn6qb" event={"ID":"d3922ff9-5f68-4ef0-8a15-d0b4b566e78b","Type":"ContainerDied","Data":"756330fa92870345d22f44ee244d79a2a6b3c62a0fde635270aef237d80b8ecc"} Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.039532 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="756330fa92870345d22f44ee244d79a2a6b3c62a0fde635270aef237d80b8ecc" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.042195 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c6jcv" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.043265 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c6jcv" event={"ID":"f8f92e52-4089-4f9a-90bc-a606d37b058d","Type":"ContainerDied","Data":"3e85236e84a671cc7e3a018c1c0d770b3fcc38f33272e74381f2938a022f16f6"} Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.043317 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e85236e84a671cc7e3a018c1c0d770b3fcc38f33272e74381f2938a022f16f6" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.048382 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xkjmp" event={"ID":"415ab3e9-f3df-47d9-9382-28313dc767d4","Type":"ContainerDied","Data":"06eb0117fc6440b11d215a17a3c132e744d3e2fb5008d7eb2e34a90b16ea7768"} Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.048407 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xkjmp" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.048450 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06eb0117fc6440b11d215a17a3c132e744d3e2fb5008d7eb2e34a90b16ea7768" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.050144 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-dqfph" event={"ID":"ab2b3705-b4ae-41bc-961c-b249f979ce40","Type":"ContainerDied","Data":"74b71994d56ab960cdb559c4a535da83ad12d6fc89a1b2c4ac7dba25840c55aa"} Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.050169 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74b71994d56ab960cdb559c4a535da83ad12d6fc89a1b2c4ac7dba25840c55aa" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.050262 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-dqfph" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.059708 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-d401-account-create-update-vp8qw" event={"ID":"f878f255-96b1-4ac5-89ab-6890e1ada898","Type":"ContainerDied","Data":"3bc9d57351663414fe790c2e0fe7f4642c46bee559cf00a8c238299bd207fdc3"} Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.059813 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc9d57351663414fe790c2e0fe7f4642c46bee559cf00a8c238299bd207fdc3" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.059958 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-d401-account-create-update-vp8qw" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.131404 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfwrc\" (UniqueName: \"kubernetes.io/projected/415ab3e9-f3df-47d9-9382-28313dc767d4-kube-api-access-bfwrc\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:46 crc kubenswrapper[4784]: I0123 06:40:46.131446 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415ab3e9-f3df-47d9-9382-28313dc767d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.124428 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dqv9q"] Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125438 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerName="dnsmasq-dns" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125454 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerName="dnsmasq-dns" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125470 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456b7f3f-ca26-4bf9-944f-fb93921474fd" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125481 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="456b7f3f-ca26-4bf9-944f-fb93921474fd" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125494 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f878f255-96b1-4ac5-89ab-6890e1ada898" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125501 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f878f255-96b1-4ac5-89ab-6890e1ada898" 
containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125518 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f31b47-1781-4d5a-b7ee-13ec522694d8" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125526 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f31b47-1781-4d5a-b7ee-13ec522694d8" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125539 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415ab3e9-f3df-47d9-9382-28313dc767d4" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125549 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="415ab3e9-f3df-47d9-9382-28313dc767d4" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125571 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a02b591-5a08-4a50-a248-9d6fb8c9e13e" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125579 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a02b591-5a08-4a50-a248-9d6fb8c9e13e" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125595 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3922ff9-5f68-4ef0-8a15-d0b4b566e78b" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125603 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3922ff9-5f68-4ef0-8a15-d0b4b566e78b" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125617 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2b3705-b4ae-41bc-961c-b249f979ce40" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125624 
4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2b3705-b4ae-41bc-961c-b249f979ce40" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125644 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerName="init" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125652 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerName="init" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125675 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398711da-15cd-410f-8a7f-8ba41455e438" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125683 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="398711da-15cd-410f-8a7f-8ba41455e438" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: E0123 06:40:47.125692 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f92e52-4089-4f9a-90bc-a606d37b058d" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.125702 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f92e52-4089-4f9a-90bc-a606d37b058d" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126013 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f878f255-96b1-4ac5-89ab-6890e1ada898" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126030 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f92e52-4089-4f9a-90bc-a606d37b058d" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126037 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="398711da-15cd-410f-8a7f-8ba41455e438" containerName="mariadb-account-create-update" Jan 
23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126044 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a02b591-5a08-4a50-a248-9d6fb8c9e13e" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126053 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="456b7f3f-ca26-4bf9-944f-fb93921474fd" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126061 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2b3705-b4ae-41bc-961c-b249f979ce40" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126068 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f31b47-1781-4d5a-b7ee-13ec522694d8" containerName="mariadb-database-create" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126076 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3922ff9-5f68-4ef0-8a15-d0b4b566e78b" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126090 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="415ab3e9-f3df-47d9-9382-28313dc767d4" containerName="mariadb-account-create-update" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.126104 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f489a74-d7ce-4b5f-90f9-a1075b8e6b97" containerName="dnsmasq-dns" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.127047 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.129698 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.130414 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zwc5d" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.136883 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dqv9q"] Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.257608 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-db-sync-config-data\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.257715 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-config-data\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.257819 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-combined-ca-bundle\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.257868 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtx6\" (UniqueName: 
\"kubernetes.io/projected/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-kube-api-access-nqtx6\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.360242 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-db-sync-config-data\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.360345 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-config-data\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.360430 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-combined-ca-bundle\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.360451 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqtx6\" (UniqueName: \"kubernetes.io/projected/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-kube-api-access-nqtx6\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.369511 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-db-sync-config-data\") pod \"glance-db-sync-dqv9q\" 
(UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.370347 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-config-data\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.376064 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-combined-ca-bundle\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.400842 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtx6\" (UniqueName: \"kubernetes.io/projected/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-kube-api-access-nqtx6\") pod \"glance-db-sync-dqv9q\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.458434 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dqv9q" Jan 23 06:40:47 crc kubenswrapper[4784]: I0123 06:40:47.869075 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.060273 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dqv9q"] Jan 23 06:40:48 crc kubenswrapper[4784]: W0123 06:40:48.066426 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a92a258_aeae_45d3_ac60_f5d9033a0e5c.slice/crio-d6eab9bcc8158d83921983fb6c600b45f459ff202ce1421e79e866eee8a12f69 WatchSource:0}: Error finding container d6eab9bcc8158d83921983fb6c600b45f459ff202ce1421e79e866eee8a12f69: Status 404 returned error can't find the container with id d6eab9bcc8158d83921983fb6c600b45f459ff202ce1421e79e866eee8a12f69 Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.084523 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dqv9q" event={"ID":"4a92a258-aeae-45d3-ac60-f5d9033a0e5c","Type":"ContainerStarted","Data":"d6eab9bcc8158d83921983fb6c600b45f459ff202ce1421e79e866eee8a12f69"} Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.295939 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.297898 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.299942 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xdq5g" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.301827 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.301848 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.302253 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.315145 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.381495 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.381917 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73417c1c-ce94-42f8-bdcb-6db903adc851-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.382112 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " 
pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.382258 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.382373 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdqvn\" (UniqueName: \"kubernetes.io/projected/73417c1c-ce94-42f8-bdcb-6db903adc851-kube-api-access-rdqvn\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.382466 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73417c1c-ce94-42f8-bdcb-6db903adc851-scripts\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.382558 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73417c1c-ce94-42f8-bdcb-6db903adc851-config\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.484096 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.484171 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rdqvn\" (UniqueName: \"kubernetes.io/projected/73417c1c-ce94-42f8-bdcb-6db903adc851-kube-api-access-rdqvn\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.484211 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73417c1c-ce94-42f8-bdcb-6db903adc851-scripts\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.484251 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73417c1c-ce94-42f8-bdcb-6db903adc851-config\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.484306 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.484337 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73417c1c-ce94-42f8-bdcb-6db903adc851-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.484379 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.485987 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73417c1c-ce94-42f8-bdcb-6db903adc851-config\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.487158 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73417c1c-ce94-42f8-bdcb-6db903adc851-scripts\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.487605 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73417c1c-ce94-42f8-bdcb-6db903adc851-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.495309 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.496548 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.504482 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdqvn\" (UniqueName: 
\"kubernetes.io/projected/73417c1c-ce94-42f8-bdcb-6db903adc851-kube-api-access-rdqvn\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.511504 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/73417c1c-ce94-42f8-bdcb-6db903adc851-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"73417c1c-ce94-42f8-bdcb-6db903adc851\") " pod="openstack/ovn-northd-0" Jan 23 06:40:48 crc kubenswrapper[4784]: I0123 06:40:48.624527 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 06:40:49 crc kubenswrapper[4784]: I0123 06:40:49.731619 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xkjmp"] Jan 23 06:40:49 crc kubenswrapper[4784]: I0123 06:40:49.739970 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xkjmp"] Jan 23 06:40:50 crc kubenswrapper[4784]: I0123 06:40:50.114178 4784 generic.go:334] "Generic (PLEG): container finished" podID="008ddd6f-ae82-41ee-a0d7-ad63e2880889" containerID="c512c0003b2aef8f72bd1a110c893600342444324eae6d12af0c7e716279b291" exitCode=0 Jan 23 06:40:50 crc kubenswrapper[4784]: I0123 06:40:50.114270 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v8cqj" event={"ID":"008ddd6f-ae82-41ee-a0d7-ad63e2880889","Type":"ContainerDied","Data":"c512c0003b2aef8f72bd1a110c893600342444324eae6d12af0c7e716279b291"} Jan 23 06:40:50 crc kubenswrapper[4784]: I0123 06:40:50.333218 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 06:40:51 crc kubenswrapper[4784]: I0123 06:40:51.151382 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"73417c1c-ce94-42f8-bdcb-6db903adc851","Type":"ContainerStarted","Data":"9676660ee0c80a676e949135ba171496ea269810b31202f7a4388ca623d121e9"} Jan 23 06:40:51 crc kubenswrapper[4784]: I0123 06:40:51.165348 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerStarted","Data":"906b02f5a9f8d935791ce9223f86e87324bcd6c42137f30d2298b20445e08329"} Jan 23 06:40:51 crc kubenswrapper[4784]: I0123 06:40:51.212210 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.426437784 podStartE2EDuration="1m27.212176994s" podCreationTimestamp="2026-01-23 06:39:24 +0000 UTC" firstStartedPulling="2026-01-23 06:39:27.117104569 +0000 UTC m=+1170.349612553" lastFinishedPulling="2026-01-23 06:40:49.902843789 +0000 UTC m=+1253.135351763" observedRunningTime="2026-01-23 06:40:51.197961445 +0000 UTC m=+1254.430469429" watchObservedRunningTime="2026-01-23 06:40:51.212176994 +0000 UTC m=+1254.444684968" Jan 23 06:40:51 crc kubenswrapper[4784]: I0123 06:40:51.269685 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415ab3e9-f3df-47d9-9382-28313dc767d4" path="/var/lib/kubelet/pods/415ab3e9-f3df-47d9-9382-28313dc767d4/volumes" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.341888 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.342281 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k5dcn" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.408077 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.475193 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-combined-ca-bundle\") pod \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.475380 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4ztd\" (UniqueName: \"kubernetes.io/projected/008ddd6f-ae82-41ee-a0d7-ad63e2880889-kube-api-access-r4ztd\") pod \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.475456 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/008ddd6f-ae82-41ee-a0d7-ad63e2880889-etc-swift\") pod \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.475516 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-dispersionconf\") pod \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.475640 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-swiftconf\") pod \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.475677 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-ring-data-devices\") pod \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.475731 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-scripts\") pod \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\" (UID: \"008ddd6f-ae82-41ee-a0d7-ad63e2880889\") " Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.477315 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "008ddd6f-ae82-41ee-a0d7-ad63e2880889" (UID: "008ddd6f-ae82-41ee-a0d7-ad63e2880889"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.477422 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008ddd6f-ae82-41ee-a0d7-ad63e2880889-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "008ddd6f-ae82-41ee-a0d7-ad63e2880889" (UID: "008ddd6f-ae82-41ee-a0d7-ad63e2880889"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.487207 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008ddd6f-ae82-41ee-a0d7-ad63e2880889-kube-api-access-r4ztd" (OuterVolumeSpecName: "kube-api-access-r4ztd") pod "008ddd6f-ae82-41ee-a0d7-ad63e2880889" (UID: "008ddd6f-ae82-41ee-a0d7-ad63e2880889"). InnerVolumeSpecName "kube-api-access-r4ztd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.487339 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "008ddd6f-ae82-41ee-a0d7-ad63e2880889" (UID: "008ddd6f-ae82-41ee-a0d7-ad63e2880889"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.510273 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "008ddd6f-ae82-41ee-a0d7-ad63e2880889" (UID: "008ddd6f-ae82-41ee-a0d7-ad63e2880889"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.512246 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-scripts" (OuterVolumeSpecName: "scripts") pod "008ddd6f-ae82-41ee-a0d7-ad63e2880889" (UID: "008ddd6f-ae82-41ee-a0d7-ad63e2880889"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.526080 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "008ddd6f-ae82-41ee-a0d7-ad63e2880889" (UID: "008ddd6f-ae82-41ee-a0d7-ad63e2880889"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.585441 4784 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/008ddd6f-ae82-41ee-a0d7-ad63e2880889-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.585493 4784 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.585509 4784 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.585521 4784 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.585532 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/008ddd6f-ae82-41ee-a0d7-ad63e2880889-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.585543 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/008ddd6f-ae82-41ee-a0d7-ad63e2880889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.585552 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4ztd\" (UniqueName: \"kubernetes.io/projected/008ddd6f-ae82-41ee-a0d7-ad63e2880889-kube-api-access-r4ztd\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.606516 4784 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sj5dx-config-ls5k5"] Jan 23 06:40:52 crc kubenswrapper[4784]: E0123 06:40:52.607104 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008ddd6f-ae82-41ee-a0d7-ad63e2880889" containerName="swift-ring-rebalance" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.607133 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="008ddd6f-ae82-41ee-a0d7-ad63e2880889" containerName="swift-ring-rebalance" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.607400 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="008ddd6f-ae82-41ee-a0d7-ad63e2880889" containerName="swift-ring-rebalance" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.608320 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.614281 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.615373 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sj5dx-config-ls5k5"] Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.687516 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.687624 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzjm\" (UniqueName: \"kubernetes.io/projected/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-kube-api-access-whzjm\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: 
\"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.687694 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-scripts\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.687725 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-additional-scripts\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.687785 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-log-ovn\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.687833 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run-ovn\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790112 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whzjm\" (UniqueName: 
\"kubernetes.io/projected/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-kube-api-access-whzjm\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790214 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-scripts\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790251 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-additional-scripts\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790295 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-log-ovn\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790323 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run-ovn\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790368 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790878 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790877 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run-ovn\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.790946 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-log-ovn\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.791520 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-additional-scripts\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.793197 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-scripts\") 
pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:52 crc kubenswrapper[4784]: I0123 06:40:52.811074 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whzjm\" (UniqueName: \"kubernetes.io/projected/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-kube-api-access-whzjm\") pod \"ovn-controller-sj5dx-config-ls5k5\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.056287 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.204801 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v8cqj" event={"ID":"008ddd6f-ae82-41ee-a0d7-ad63e2880889","Type":"ContainerDied","Data":"b8e83cdb7d5317e0ed28303f955d86e9f090b40de76a81b472649a3e476f5d01"} Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.205253 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8e83cdb7d5317e0ed28303f955d86e9f090b40de76a81b472649a3e476f5d01" Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.204874 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v8cqj" Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.212904 4784 generic.go:334] "Generic (PLEG): container finished" podID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" containerID="6c523486f92879d29f8c12e1686060624335e68261e81600c144abb26218a886" exitCode=0 Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.212982 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e37da8a-e964-4f8b-aacc-2937130e2e7b","Type":"ContainerDied","Data":"6c523486f92879d29f8c12e1686060624335e68261e81600c144abb26218a886"} Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.217357 4784 generic.go:334] "Generic (PLEG): container finished" podID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerID="108ed665071583075faa37237a76c5edf56e95c94290ca4776fc25ebc9dafb9e" exitCode=0 Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.217440 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e79eab6-cf02-4c69-99bd-2f3512c809f3","Type":"ContainerDied","Data":"108ed665071583075faa37237a76c5edf56e95c94290ca4776fc25ebc9dafb9e"} Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.225005 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"73417c1c-ce94-42f8-bdcb-6db903adc851","Type":"ContainerStarted","Data":"bda9994883e3ec357da0902d3d0cb79e9493e9ed3b639cf7e90a5332c2c02cd7"} Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.225095 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"73417c1c-ce94-42f8-bdcb-6db903adc851","Type":"ContainerStarted","Data":"fc3c7e3539bf2dc0e650e383bfa49006b7a709459b540f918070f3c60ce1b88c"} Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.225590 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 
06:40:53.271227 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.113068618 podStartE2EDuration="5.271194188s" podCreationTimestamp="2026-01-23 06:40:48 +0000 UTC" firstStartedPulling="2026-01-23 06:40:50.342468955 +0000 UTC m=+1253.574976929" lastFinishedPulling="2026-01-23 06:40:52.500594525 +0000 UTC m=+1255.733102499" observedRunningTime="2026-01-23 06:40:53.259673504 +0000 UTC m=+1256.492181488" watchObservedRunningTime="2026-01-23 06:40:53.271194188 +0000 UTC m=+1256.503702162" Jan 23 06:40:53 crc kubenswrapper[4784]: I0123 06:40:53.575504 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sj5dx-config-ls5k5"] Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.236984 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e79eab6-cf02-4c69-99bd-2f3512c809f3","Type":"ContainerStarted","Data":"fbbccf065bf4ffbd21909156a14260a505558fbf2525c2e87f43df99e4ee0a5d"} Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.237586 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.243448 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e37da8a-e964-4f8b-aacc-2937130e2e7b","Type":"ContainerStarted","Data":"bab1e179a3cb60088fb59145c12918de242c44f03ae18ef36400199e95e6c870"} Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.243681 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.245535 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sj5dx-config-ls5k5" 
event={"ID":"641175ec-bf26-49e2-8f87-a5e4c25ee6a6","Type":"ContainerStarted","Data":"944dacb24b0774ef942394c6c63e230b94461947eb399460b1e0214bab2aa9d5"} Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.245662 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sj5dx-config-ls5k5" event={"ID":"641175ec-bf26-49e2-8f87-a5e4c25ee6a6","Type":"ContainerStarted","Data":"ee3f4fc42dafadb30c941c594cfa3780570bfb1a063b9f1717c52ff80d5d18b0"} Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.287695 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371938.567131 podStartE2EDuration="1m38.287644664s" podCreationTimestamp="2026-01-23 06:39:16 +0000 UTC" firstStartedPulling="2026-01-23 06:39:19.160366505 +0000 UTC m=+1162.392874479" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:54.269419616 +0000 UTC m=+1257.501927630" watchObservedRunningTime="2026-01-23 06:40:54.287644664 +0000 UTC m=+1257.520152638" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.315015 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.595017639 podStartE2EDuration="1m37.314985705s" podCreationTimestamp="2026-01-23 06:39:17 +0000 UTC" firstStartedPulling="2026-01-23 06:39:20.18104731 +0000 UTC m=+1163.413555284" lastFinishedPulling="2026-01-23 06:40:17.901015376 +0000 UTC m=+1221.133523350" observedRunningTime="2026-01-23 06:40:54.300302675 +0000 UTC m=+1257.532810699" watchObservedRunningTime="2026-01-23 06:40:54.314985705 +0000 UTC m=+1257.547493699" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.325166 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sj5dx-config-ls5k5" podStartSLOduration=2.325129084 podStartE2EDuration="2.325129084s" podCreationTimestamp="2026-01-23 06:40:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:54.32047276 +0000 UTC m=+1257.552980754" watchObservedRunningTime="2026-01-23 06:40:54.325129084 +0000 UTC m=+1257.557637058" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.760053 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tvgsx"] Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.762115 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tvgsx" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.767702 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.772220 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tvgsx"] Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.842723 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-operator-scripts\") pod \"root-account-create-update-tvgsx\" (UID: \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\") " pod="openstack/root-account-create-update-tvgsx" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.843104 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bprt\" (UniqueName: \"kubernetes.io/projected/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-kube-api-access-6bprt\") pod \"root-account-create-update-tvgsx\" (UID: \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\") " pod="openstack/root-account-create-update-tvgsx" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.946008 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-operator-scripts\") pod \"root-account-create-update-tvgsx\" (UID: \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\") " pod="openstack/root-account-create-update-tvgsx" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.946659 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bprt\" (UniqueName: \"kubernetes.io/projected/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-kube-api-access-6bprt\") pod \"root-account-create-update-tvgsx\" (UID: \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\") " pod="openstack/root-account-create-update-tvgsx" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.947443 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-operator-scripts\") pod \"root-account-create-update-tvgsx\" (UID: \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\") " pod="openstack/root-account-create-update-tvgsx" Jan 23 06:40:54 crc kubenswrapper[4784]: I0123 06:40:54.984775 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bprt\" (UniqueName: \"kubernetes.io/projected/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-kube-api-access-6bprt\") pod \"root-account-create-update-tvgsx\" (UID: \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\") " pod="openstack/root-account-create-update-tvgsx" Jan 23 06:40:55 crc kubenswrapper[4784]: I0123 06:40:55.086209 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tvgsx" Jan 23 06:40:55 crc kubenswrapper[4784]: I0123 06:40:55.258031 4784 generic.go:334] "Generic (PLEG): container finished" podID="641175ec-bf26-49e2-8f87-a5e4c25ee6a6" containerID="944dacb24b0774ef942394c6c63e230b94461947eb399460b1e0214bab2aa9d5" exitCode=0 Jan 23 06:40:55 crc kubenswrapper[4784]: I0123 06:40:55.283119 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sj5dx-config-ls5k5" event={"ID":"641175ec-bf26-49e2-8f87-a5e4c25ee6a6","Type":"ContainerDied","Data":"944dacb24b0774ef942394c6c63e230b94461947eb399460b1e0214bab2aa9d5"} Jan 23 06:40:55 crc kubenswrapper[4784]: I0123 06:40:55.627104 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tvgsx"] Jan 23 06:40:55 crc kubenswrapper[4784]: I0123 06:40:55.797784 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 06:40:55 crc kubenswrapper[4784]: I0123 06:40:55.797860 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 06:40:55 crc kubenswrapper[4784]: I0123 06:40:55.802058 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 06:40:56 crc kubenswrapper[4784]: I0123 06:40:56.270266 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvgsx" event={"ID":"86a35ddd-0a33-4ef4-86d1-11c1279b23d7","Type":"ContainerStarted","Data":"1f702d777413a3bf86dda125db8222ae068a517e0c1111e3600770715955c86c"} Jan 23 06:40:56 crc kubenswrapper[4784]: I0123 06:40:56.272376 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 06:40:56 crc kubenswrapper[4784]: I0123 06:40:56.593264 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:56 crc kubenswrapper[4784]: I0123 06:40:56.602715 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/abb5c886-7378-4bdd-b56a-cc803db75cbd-etc-swift\") pod \"swift-storage-0\" (UID: \"abb5c886-7378-4bdd-b56a-cc803db75cbd\") " pod="openstack/swift-storage-0" Jan 23 06:40:56 crc kubenswrapper[4784]: I0123 06:40:56.776031 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.269888 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.315656 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-additional-scripts\") pod \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.315897 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-log-ovn\") pod \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.316113 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whzjm\" (UniqueName: \"kubernetes.io/projected/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-kube-api-access-whzjm\") pod \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " 
Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.316258 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-scripts\") pod \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.316329 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run-ovn\") pod \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.316369 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run\") pod \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\" (UID: \"641175ec-bf26-49e2-8f87-a5e4c25ee6a6\") " Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.317851 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "641175ec-bf26-49e2-8f87-a5e4c25ee6a6" (UID: "641175ec-bf26-49e2-8f87-a5e4c25ee6a6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.317913 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "641175ec-bf26-49e2-8f87-a5e4c25ee6a6" (UID: "641175ec-bf26-49e2-8f87-a5e4c25ee6a6"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.317939 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "641175ec-bf26-49e2-8f87-a5e4c25ee6a6" (UID: "641175ec-bf26-49e2-8f87-a5e4c25ee6a6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.318032 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run" (OuterVolumeSpecName: "var-run") pod "641175ec-bf26-49e2-8f87-a5e4c25ee6a6" (UID: "641175ec-bf26-49e2-8f87-a5e4c25ee6a6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.322392 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-scripts" (OuterVolumeSpecName: "scripts") pod "641175ec-bf26-49e2-8f87-a5e4c25ee6a6" (UID: "641175ec-bf26-49e2-8f87-a5e4c25ee6a6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.328829 4784 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.328861 4784 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.328876 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.328889 4784 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.328905 4784 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.343744 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-kube-api-access-whzjm" (OuterVolumeSpecName: "kube-api-access-whzjm") pod "641175ec-bf26-49e2-8f87-a5e4c25ee6a6" (UID: "641175ec-bf26-49e2-8f87-a5e4c25ee6a6"). InnerVolumeSpecName "kube-api-access-whzjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.366733 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sj5dx-config-ls5k5" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.367048 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sj5dx-config-ls5k5" event={"ID":"641175ec-bf26-49e2-8f87-a5e4c25ee6a6","Type":"ContainerDied","Data":"ee3f4fc42dafadb30c941c594cfa3780570bfb1a063b9f1717c52ff80d5d18b0"} Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.367109 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee3f4fc42dafadb30c941c594cfa3780570bfb1a063b9f1717c52ff80d5d18b0" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.367180 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sj5dx" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.433587 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whzjm\" (UniqueName: \"kubernetes.io/projected/641175ec-bf26-49e2-8f87-a5e4c25ee6a6-kube-api-access-whzjm\") on node \"crc\" DevicePath \"\"" Jan 23 06:40:57 crc kubenswrapper[4784]: I0123 06:40:57.796185 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 06:40:58 crc kubenswrapper[4784]: I0123 06:40:58.403208 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"0d25f03ab2cd19301c200b98c7a114af2b37bf58cdb05e1f1eb0804bcc4c563c"} Jan 23 06:40:58 crc kubenswrapper[4784]: I0123 06:40:58.407768 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sj5dx-config-ls5k5"] Jan 23 06:40:58 crc kubenswrapper[4784]: I0123 06:40:58.408239 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvgsx" 
event={"ID":"86a35ddd-0a33-4ef4-86d1-11c1279b23d7","Type":"ContainerStarted","Data":"fb78682cb5e78f2ebf2121cc2e05e3064a57e2dd9bdb95b1386a3f5a4be86a68"} Jan 23 06:40:58 crc kubenswrapper[4784]: I0123 06:40:58.428435 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sj5dx-config-ls5k5"] Jan 23 06:40:58 crc kubenswrapper[4784]: I0123 06:40:58.440436 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-tvgsx" podStartSLOduration=4.440408604 podStartE2EDuration="4.440408604s" podCreationTimestamp="2026-01-23 06:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:40:58.427119357 +0000 UTC m=+1261.659627331" watchObservedRunningTime="2026-01-23 06:40:58.440408604 +0000 UTC m=+1261.672916578" Jan 23 06:40:59 crc kubenswrapper[4784]: I0123 06:40:59.265906 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641175ec-bf26-49e2-8f87-a5e4c25ee6a6" path="/var/lib/kubelet/pods/641175ec-bf26-49e2-8f87-a5e4c25ee6a6/volumes" Jan 23 06:40:59 crc kubenswrapper[4784]: I0123 06:40:59.729213 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:40:59 crc kubenswrapper[4784]: I0123 06:40:59.729726 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="prometheus" containerID="cri-o://9c4821b368a785659445a70da81187ff14092dcaa790f9d0352a62fa1d701929" gracePeriod=600 Jan 23 06:40:59 crc kubenswrapper[4784]: I0123 06:40:59.729873 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="thanos-sidecar" 
containerID="cri-o://906b02f5a9f8d935791ce9223f86e87324bcd6c42137f30d2298b20445e08329" gracePeriod=600 Jan 23 06:40:59 crc kubenswrapper[4784]: I0123 06:40:59.729890 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="config-reloader" containerID="cri-o://ee4f104cdea26e016b99070d2bb297037661a571a42f20c04ca04a25c8f53e70" gracePeriod=600 Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.433636 4784 generic.go:334] "Generic (PLEG): container finished" podID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerID="906b02f5a9f8d935791ce9223f86e87324bcd6c42137f30d2298b20445e08329" exitCode=0 Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.434178 4784 generic.go:334] "Generic (PLEG): container finished" podID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerID="ee4f104cdea26e016b99070d2bb297037661a571a42f20c04ca04a25c8f53e70" exitCode=0 Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.434193 4784 generic.go:334] "Generic (PLEG): container finished" podID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerID="9c4821b368a785659445a70da81187ff14092dcaa790f9d0352a62fa1d701929" exitCode=0 Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.433724 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerDied","Data":"906b02f5a9f8d935791ce9223f86e87324bcd6c42137f30d2298b20445e08329"} Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.434283 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerDied","Data":"ee4f104cdea26e016b99070d2bb297037661a571a42f20c04ca04a25c8f53e70"} Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.434303 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerDied","Data":"9c4821b368a785659445a70da81187ff14092dcaa790f9d0352a62fa1d701929"} Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.436847 4784 generic.go:334] "Generic (PLEG): container finished" podID="86a35ddd-0a33-4ef4-86d1-11c1279b23d7" containerID="fb78682cb5e78f2ebf2121cc2e05e3064a57e2dd9bdb95b1386a3f5a4be86a68" exitCode=0 Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.436891 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvgsx" event={"ID":"86a35ddd-0a33-4ef4-86d1-11c1279b23d7","Type":"ContainerDied","Data":"fb78682cb5e78f2ebf2121cc2e05e3064a57e2dd9bdb95b1386a3f5a4be86a68"} Jan 23 06:41:00 crc kubenswrapper[4784]: I0123 06:41:00.798121 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.109:9090/-/ready\": dial tcp 10.217.0.109:9090: connect: connection refused" Jan 23 06:41:04 crc kubenswrapper[4784]: I0123 06:41:03.702705 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 06:41:05 crc kubenswrapper[4784]: I0123 06:41:05.799255 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.109:9090/-/ready\": dial tcp 10.217.0.109:9090: connect: connection refused" Jan 23 06:41:06 crc kubenswrapper[4784]: I0123 06:41:06.527407 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvgsx" event={"ID":"86a35ddd-0a33-4ef4-86d1-11c1279b23d7","Type":"ContainerDied","Data":"1f702d777413a3bf86dda125db8222ae068a517e0c1111e3600770715955c86c"} Jan 23 06:41:06 crc 
kubenswrapper[4784]: I0123 06:41:06.527893 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f702d777413a3bf86dda125db8222ae068a517e0c1111e3600770715955c86c" Jan 23 06:41:06 crc kubenswrapper[4784]: E0123 06:41:06.622195 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 23 06:41:06 crc kubenswrapper[4784]: E0123 06:41:06.622377 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqtx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminati
on-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-dqv9q_openstack(4a92a258-aeae-45d3-ac60-f5d9033a0e5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:41:06 crc kubenswrapper[4784]: E0123 06:41:06.624900 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-dqv9q" podUID="4a92a258-aeae-45d3-ac60-f5d9033a0e5c" Jan 23 06:41:06 crc kubenswrapper[4784]: I0123 06:41:06.635672 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tvgsx" Jan 23 06:41:06 crc kubenswrapper[4784]: I0123 06:41:06.763710 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bprt\" (UniqueName: \"kubernetes.io/projected/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-kube-api-access-6bprt\") pod \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\" (UID: \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\") " Jan 23 06:41:06 crc kubenswrapper[4784]: I0123 06:41:06.763918 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-operator-scripts\") pod \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\" (UID: \"86a35ddd-0a33-4ef4-86d1-11c1279b23d7\") " Jan 23 06:41:06 crc kubenswrapper[4784]: I0123 06:41:06.765293 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86a35ddd-0a33-4ef4-86d1-11c1279b23d7" (UID: "86a35ddd-0a33-4ef4-86d1-11c1279b23d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:06 crc kubenswrapper[4784]: I0123 06:41:06.777385 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-kube-api-access-6bprt" (OuterVolumeSpecName: "kube-api-access-6bprt") pod "86a35ddd-0a33-4ef4-86d1-11c1279b23d7" (UID: "86a35ddd-0a33-4ef4-86d1-11c1279b23d7"). InnerVolumeSpecName "kube-api-access-6bprt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:06 crc kubenswrapper[4784]: I0123 06:41:06.870251 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bprt\" (UniqueName: \"kubernetes.io/projected/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-kube-api-access-6bprt\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:06 crc kubenswrapper[4784]: I0123 06:41:06.870624 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86a35ddd-0a33-4ef4-86d1-11c1279b23d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.055575 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075093 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-config\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075159 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-web-config\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075228 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-1\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075279 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-tls-assets\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075308 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-thanos-prometheus-http-client-file\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075480 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075519 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-2\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075557 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/347f59fd-0378-4413-8880-7d7e9fe9a859-config-out\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.075600 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6xcj\" (UniqueName: 
\"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-kube-api-access-l6xcj\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.076240 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.077114 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.081855 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-kube-api-access-l6xcj" (OuterVolumeSpecName: "kube-api-access-l6xcj") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "kube-api-access-l6xcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.109941 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.112128 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347f59fd-0378-4413-8880-7d7e9fe9a859-config-out" (OuterVolumeSpecName: "config-out") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.112335 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-config" (OuterVolumeSpecName: "config") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.112504 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.113076 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "pvc-e6192221-140f-46e3-a3e7-14d2acad4265". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.123820 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-web-config" (OuterVolumeSpecName: "web-config") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.177484 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-0\") pod \"347f59fd-0378-4413-8880-7d7e9fe9a859\" (UID: \"347f59fd-0378-4413-8880-7d7e9fe9a859\") " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178212 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6xcj\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-kube-api-access-l6xcj\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178238 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178254 4784 reconciler_common.go:293] 
"Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178270 4784 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178254 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "347f59fd-0378-4413-8880-7d7e9fe9a859" (UID: "347f59fd-0378-4413-8880-7d7e9fe9a859"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178284 4784 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/347f59fd-0378-4413-8880-7d7e9fe9a859-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178374 4784 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/347f59fd-0378-4413-8880-7d7e9fe9a859-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178429 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") on node \"crc\" " Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178450 4784 reconciler_common.go:293] "Volume detached for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.178465 4784 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/347f59fd-0378-4413-8880-7d7e9fe9a859-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.204931 4784 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.205215 4784 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e6192221-140f-46e3-a3e7-14d2acad4265" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265") on node "crc" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.308194 4784 reconciler_common.go:293] "Volume detached for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.308295 4784 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/347f59fd-0378-4413-8880-7d7e9fe9a859-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.542160 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tvgsx" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.542191 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"347f59fd-0378-4413-8880-7d7e9fe9a859","Type":"ContainerDied","Data":"c0e5ec172879fa36199cac02ae495b5dee2afc0ad356cf7a08b8e13f7d2aa98d"} Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.542166 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.542491 4784 scope.go:117] "RemoveContainer" containerID="906b02f5a9f8d935791ce9223f86e87324bcd6c42137f30d2298b20445e08329" Jan 23 06:41:07 crc kubenswrapper[4784]: E0123 06:41:07.547057 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-dqv9q" podUID="4a92a258-aeae-45d3-ac60-f5d9033a0e5c" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.606197 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.615223 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644064 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:41:07 crc kubenswrapper[4784]: E0123 06:41:07.644608 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="thanos-sidecar" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644630 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" 
containerName="thanos-sidecar" Jan 23 06:41:07 crc kubenswrapper[4784]: E0123 06:41:07.644649 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="config-reloader" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644656 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="config-reloader" Jan 23 06:41:07 crc kubenswrapper[4784]: E0123 06:41:07.644670 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="init-config-reloader" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644678 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="init-config-reloader" Jan 23 06:41:07 crc kubenswrapper[4784]: E0123 06:41:07.644697 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641175ec-bf26-49e2-8f87-a5e4c25ee6a6" containerName="ovn-config" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644703 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="641175ec-bf26-49e2-8f87-a5e4c25ee6a6" containerName="ovn-config" Jan 23 06:41:07 crc kubenswrapper[4784]: E0123 06:41:07.644718 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a35ddd-0a33-4ef4-86d1-11c1279b23d7" containerName="mariadb-account-create-update" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644724 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a35ddd-0a33-4ef4-86d1-11c1279b23d7" containerName="mariadb-account-create-update" Jan 23 06:41:07 crc kubenswrapper[4784]: E0123 06:41:07.644743 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="prometheus" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644754 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="prometheus" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644962 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="thanos-sidecar" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644982 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="config-reloader" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.644997 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="641175ec-bf26-49e2-8f87-a5e4c25ee6a6" containerName="ovn-config" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.645007 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" containerName="prometheus" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.645017 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a35ddd-0a33-4ef4-86d1-11c1279b23d7" containerName="mariadb-account-create-update" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.647221 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.656057 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.656590 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bvsrx" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.657082 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.657462 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.657666 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.657903 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.660863 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.661118 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.665192 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.696021 4784 scope.go:117] "RemoveContainer" containerID="ee4f104cdea26e016b99070d2bb297037661a571a42f20c04ca04a25c8f53e70" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.699767 4784 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.757212 4784 scope.go:117] "RemoveContainer" containerID="9c4821b368a785659445a70da81187ff14092dcaa790f9d0352a62fa1d701929" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819458 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819542 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc6kd\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-kube-api-access-vc6kd\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819581 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819612 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc 
kubenswrapper[4784]: I0123 06:41:07.819679 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819722 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819816 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819857 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819894 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/3e974f78-4c17-480b-8a35-285a89f1cb35-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819920 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819949 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.819987 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.820029 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.897246 
4784 scope.go:117] "RemoveContainer" containerID="1a9ef858ff7f17c95f4c419002b58ab8c828215592da098030eb43780983b0ac" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.922445 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.922543 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.922584 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.922627 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.922670 4784 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e974f78-4c17-480b-8a35-285a89f1cb35-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.922695 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.923062 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.923630 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.923737 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.924145 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.924356 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.924432 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc6kd\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-kube-api-access-vc6kd\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.924473 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.924507 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc 
kubenswrapper[4784]: I0123 06:41:07.927211 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e974f78-4c17-480b-8a35-285a89f1cb35-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.927855 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.927992 4784 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.928023 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/984fdf672f705a078d51f1b73c390067f647610423a2c84302a50834be3d8ee1/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.928461 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 
23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.934402 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.934613 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.934659 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.937417 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.937637 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.939486 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.942591 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.948284 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc6kd\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-kube-api-access-vc6kd\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:07 crc kubenswrapper[4784]: I0123 06:41:07.976588 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.278124 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.337002 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.571382 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"f9f0e0396718b152feb5fbc6319a7cd3adb86b731da01af55e6a174e1c9fdb22"} Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.571456 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"9c2123487d9c849e189a29b0fb02f8d3bfb6acfe578c9586488f3759ebd64549"} Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.571477 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"28081da8fc5a6beb546fb74123ce96a0602de40ece05e54694bf3b9093b8df10"} Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.838334 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-bkcjh"] Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.839891 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.845487 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-4htjj" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.846119 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.873361 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-bkcjh"] Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.893989 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.973080 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2g628"] Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.975545 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2g628" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.978905 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-config-data\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.979106 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldrjk\" (UniqueName: \"kubernetes.io/projected/ada74437-66bf-4316-a16d-89377a5b5e41-kube-api-access-ldrjk\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.979228 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-db-sync-config-data\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:08 crc kubenswrapper[4784]: I0123 06:41:08.979348 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-combined-ca-bundle\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.018244 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2g628"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.047528 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:41:09 
crc kubenswrapper[4784]: I0123 06:41:09.081247 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-config-data\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.081305 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a0bd14-68e7-4973-ad97-42f2238300f5-operator-scripts\") pod \"barbican-db-create-2g628\" (UID: \"a5a0bd14-68e7-4973-ad97-42f2238300f5\") " pod="openstack/barbican-db-create-2g628" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.081369 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldrjk\" (UniqueName: \"kubernetes.io/projected/ada74437-66bf-4316-a16d-89377a5b5e41-kube-api-access-ldrjk\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.081431 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-db-sync-config-data\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.081506 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpwc4\" (UniqueName: \"kubernetes.io/projected/a5a0bd14-68e7-4973-ad97-42f2238300f5-kube-api-access-rpwc4\") pod \"barbican-db-create-2g628\" (UID: \"a5a0bd14-68e7-4973-ad97-42f2238300f5\") " pod="openstack/barbican-db-create-2g628" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 
06:41:09.081544 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-combined-ca-bundle\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.094855 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-combined-ca-bundle\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.111611 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-config-data\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.115335 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-db-sync-config-data\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.170680 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldrjk\" (UniqueName: \"kubernetes.io/projected/ada74437-66bf-4316-a16d-89377a5b5e41-kube-api-access-ldrjk\") pod \"watcher-db-sync-bkcjh\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.184295 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a5a0bd14-68e7-4973-ad97-42f2238300f5-operator-scripts\") pod \"barbican-db-create-2g628\" (UID: \"a5a0bd14-68e7-4973-ad97-42f2238300f5\") " pod="openstack/barbican-db-create-2g628" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.184631 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpwc4\" (UniqueName: \"kubernetes.io/projected/a5a0bd14-68e7-4973-ad97-42f2238300f5-kube-api-access-rpwc4\") pod \"barbican-db-create-2g628\" (UID: \"a5a0bd14-68e7-4973-ad97-42f2238300f5\") " pod="openstack/barbican-db-create-2g628" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.190350 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a0bd14-68e7-4973-ad97-42f2238300f5-operator-scripts\") pod \"barbican-db-create-2g628\" (UID: \"a5a0bd14-68e7-4973-ad97-42f2238300f5\") " pod="openstack/barbican-db-create-2g628" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.219193 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.262326 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpwc4\" (UniqueName: \"kubernetes.io/projected/a5a0bd14-68e7-4973-ad97-42f2238300f5-kube-api-access-rpwc4\") pod \"barbican-db-create-2g628\" (UID: \"a5a0bd14-68e7-4973-ad97-42f2238300f5\") " pod="openstack/barbican-db-create-2g628" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.283899 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347f59fd-0378-4413-8880-7d7e9fe9a859" path="/var/lib/kubelet/pods/347f59fd-0378-4413-8880-7d7e9fe9a859/volumes" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.297108 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-f5snt"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.298615 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.304579 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2g628" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.329414 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a197-account-create-update-q6j44"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.342397 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.361490 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.368071 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f5snt"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.392918 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-operator-scripts\") pod \"cinder-a197-account-create-update-q6j44\" (UID: \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\") " pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.393020 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-operator-scripts\") pod \"cinder-db-create-f5snt\" (UID: \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\") " pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.393117 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mx67\" (UniqueName: \"kubernetes.io/projected/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-kube-api-access-4mx67\") pod \"cinder-a197-account-create-update-q6j44\" (UID: \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\") " pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.393191 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkl7c\" (UniqueName: \"kubernetes.io/projected/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-kube-api-access-vkl7c\") pod \"cinder-db-create-f5snt\" 
(UID: \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\") " pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.406573 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a197-account-create-update-q6j44"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.496358 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-operator-scripts\") pod \"cinder-a197-account-create-update-q6j44\" (UID: \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\") " pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.496428 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-operator-scripts\") pod \"cinder-db-create-f5snt\" (UID: \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\") " pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.496484 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mx67\" (UniqueName: \"kubernetes.io/projected/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-kube-api-access-4mx67\") pod \"cinder-a197-account-create-update-q6j44\" (UID: \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\") " pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.496528 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkl7c\" (UniqueName: \"kubernetes.io/projected/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-kube-api-access-vkl7c\") pod \"cinder-db-create-f5snt\" (UID: \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\") " pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.502061 4784 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-operator-scripts\") pod \"cinder-a197-account-create-update-q6j44\" (UID: \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\") " pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.502713 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-operator-scripts\") pod \"cinder-db-create-f5snt\" (UID: \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\") " pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.523279 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fsq8w"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.524903 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.540490 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.540817 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2zq2z" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.540947 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.541064 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.577646 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mx67\" (UniqueName: \"kubernetes.io/projected/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-kube-api-access-4mx67\") pod \"cinder-a197-account-create-update-q6j44\" (UID: 
\"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\") " pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.578349 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkl7c\" (UniqueName: \"kubernetes.io/projected/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-kube-api-access-vkl7c\") pod \"cinder-db-create-f5snt\" (UID: \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\") " pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.594108 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fsq8w"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.623150 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-combined-ca-bundle\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.623223 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-config-data\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.623291 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95zz\" (UniqueName: \"kubernetes.io/projected/355a352a-3ae0-4db7-9a25-3588f4233973-kube-api-access-g95zz\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.645208 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.654928 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-48fe-account-create-update-cvjrf"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.659561 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerStarted","Data":"cd5cf7cffe80c6f4a5d73d06eb17dac1188c4dba2be507956617f40abe0b2abf"} Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.659691 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.671079 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.694594 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.702208 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"b62bde5159fe83b656fbeb9d70fd308dd0a75244a76fcacc7816e2b50d3165fa"} Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.727986 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnlcm\" (UniqueName: \"kubernetes.io/projected/ba896b3e-c197-41d5-b182-f17f508d32b7-kube-api-access-qnlcm\") pod \"barbican-48fe-account-create-update-cvjrf\" (UID: \"ba896b3e-c197-41d5-b182-f17f508d32b7\") " pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.728101 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-combined-ca-bundle\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.728137 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-config-data\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.728176 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba896b3e-c197-41d5-b182-f17f508d32b7-operator-scripts\") pod \"barbican-48fe-account-create-update-cvjrf\" (UID: \"ba896b3e-c197-41d5-b182-f17f508d32b7\") " 
pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.728226 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95zz\" (UniqueName: \"kubernetes.io/projected/355a352a-3ae0-4db7-9a25-3588f4233973-kube-api-access-g95zz\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.752030 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-config-data\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.763801 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95zz\" (UniqueName: \"kubernetes.io/projected/355a352a-3ae0-4db7-9a25-3588f4233973-kube-api-access-g95zz\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.768281 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-combined-ca-bundle\") pod \"keystone-db-sync-fsq8w\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.789080 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-48fe-account-create-update-cvjrf"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.831134 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnlcm\" (UniqueName: 
\"kubernetes.io/projected/ba896b3e-c197-41d5-b182-f17f508d32b7-kube-api-access-qnlcm\") pod \"barbican-48fe-account-create-update-cvjrf\" (UID: \"ba896b3e-c197-41d5-b182-f17f508d32b7\") " pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.831258 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba896b3e-c197-41d5-b182-f17f508d32b7-operator-scripts\") pod \"barbican-48fe-account-create-update-cvjrf\" (UID: \"ba896b3e-c197-41d5-b182-f17f508d32b7\") " pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.832420 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba896b3e-c197-41d5-b182-f17f508d32b7-operator-scripts\") pod \"barbican-48fe-account-create-update-cvjrf\" (UID: \"ba896b3e-c197-41d5-b182-f17f508d32b7\") " pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.865941 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.871330 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-r2prw"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.872900 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.877262 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-r2prw"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.884965 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c72e-account-create-update-flvx5"] Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.886260 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.892656 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnlcm\" (UniqueName: \"kubernetes.io/projected/ba896b3e-c197-41d5-b182-f17f508d32b7-kube-api-access-qnlcm\") pod \"barbican-48fe-account-create-update-cvjrf\" (UID: \"ba896b3e-c197-41d5-b182-f17f508d32b7\") " pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.926100 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.947819 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b439ced3-cccc-44d7-b249-a37d3505df26-operator-scripts\") pod \"neutron-db-create-r2prw\" (UID: \"b439ced3-cccc-44d7-b249-a37d3505df26\") " pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.948592 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnnvw\" (UniqueName: \"kubernetes.io/projected/b439ced3-cccc-44d7-b249-a37d3505df26-kube-api-access-qnnvw\") pod \"neutron-db-create-r2prw\" (UID: \"b439ced3-cccc-44d7-b249-a37d3505df26\") " pod="openstack/neutron-db-create-r2prw" Jan 23 
06:41:09 crc kubenswrapper[4784]: I0123 06:41:09.985107 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c72e-account-create-update-flvx5"] Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.007021 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.082334 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b439ced3-cccc-44d7-b249-a37d3505df26-operator-scripts\") pod \"neutron-db-create-r2prw\" (UID: \"b439ced3-cccc-44d7-b249-a37d3505df26\") " pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.106341 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwz5k\" (UniqueName: \"kubernetes.io/projected/1df9e961-9c7f-49bc-aae3-018a4850e116-kube-api-access-rwz5k\") pod \"neutron-c72e-account-create-update-flvx5\" (UID: \"1df9e961-9c7f-49bc-aae3-018a4850e116\") " pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.110614 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnnvw\" (UniqueName: \"kubernetes.io/projected/b439ced3-cccc-44d7-b249-a37d3505df26-kube-api-access-qnnvw\") pod \"neutron-db-create-r2prw\" (UID: \"b439ced3-cccc-44d7-b249-a37d3505df26\") " pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.090922 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b439ced3-cccc-44d7-b249-a37d3505df26-operator-scripts\") pod \"neutron-db-create-r2prw\" (UID: \"b439ced3-cccc-44d7-b249-a37d3505df26\") " pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:10 crc 
kubenswrapper[4784]: I0123 06:41:10.110816 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df9e961-9c7f-49bc-aae3-018a4850e116-operator-scripts\") pod \"neutron-c72e-account-create-update-flvx5\" (UID: \"1df9e961-9c7f-49bc-aae3-018a4850e116\") " pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.187996 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnnvw\" (UniqueName: \"kubernetes.io/projected/b439ced3-cccc-44d7-b249-a37d3505df26-kube-api-access-qnnvw\") pod \"neutron-db-create-r2prw\" (UID: \"b439ced3-cccc-44d7-b249-a37d3505df26\") " pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.219366 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwz5k\" (UniqueName: \"kubernetes.io/projected/1df9e961-9c7f-49bc-aae3-018a4850e116-kube-api-access-rwz5k\") pod \"neutron-c72e-account-create-update-flvx5\" (UID: \"1df9e961-9c7f-49bc-aae3-018a4850e116\") " pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.219571 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df9e961-9c7f-49bc-aae3-018a4850e116-operator-scripts\") pod \"neutron-c72e-account-create-update-flvx5\" (UID: \"1df9e961-9c7f-49bc-aae3-018a4850e116\") " pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.220631 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df9e961-9c7f-49bc-aae3-018a4850e116-operator-scripts\") pod \"neutron-c72e-account-create-update-flvx5\" (UID: \"1df9e961-9c7f-49bc-aae3-018a4850e116\") " 
pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.221340 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.266994 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwz5k\" (UniqueName: \"kubernetes.io/projected/1df9e961-9c7f-49bc-aae3-018a4850e116-kube-api-access-rwz5k\") pod \"neutron-c72e-account-create-update-flvx5\" (UID: \"1df9e961-9c7f-49bc-aae3-018a4850e116\") " pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.548449 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.614520 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-bkcjh"] Jan 23 06:41:10 crc kubenswrapper[4784]: I0123 06:41:10.911052 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2g628"] Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.028849 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a197-account-create-update-q6j44"] Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.047238 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fsq8w"] Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.059120 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f5snt"] Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.081234 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-r2prw"] Jan 23 06:41:11 crc kubenswrapper[4784]: W0123 06:41:11.115691 4784 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podada74437_66bf_4316_a16d_89377a5b5e41.slice/crio-63f381dbc2c33f81a0258f38e5517cb7cd24482284d4daad5e581ab5ec6fe265 WatchSource:0}: Error finding container 63f381dbc2c33f81a0258f38e5517cb7cd24482284d4daad5e581ab5ec6fe265: Status 404 returned error can't find the container with id 63f381dbc2c33f81a0258f38e5517cb7cd24482284d4daad5e581ab5ec6fe265 Jan 23 06:41:11 crc kubenswrapper[4784]: W0123 06:41:11.119538 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5a0bd14_68e7_4973_ad97_42f2238300f5.slice/crio-62b54d53f14c655486c0f15bd2b0cb1a4ecde5aa0fb7b0515d376df1586a34e5 WatchSource:0}: Error finding container 62b54d53f14c655486c0f15bd2b0cb1a4ecde5aa0fb7b0515d376df1586a34e5: Status 404 returned error can't find the container with id 62b54d53f14c655486c0f15bd2b0cb1a4ecde5aa0fb7b0515d376df1586a34e5 Jan 23 06:41:11 crc kubenswrapper[4784]: W0123 06:41:11.127065 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b44b992_71dd_4aa8_aad0_9b323d47e8fb.slice/crio-ca1174c4e8e36d4819cb1faa238745cf1426156b68e4aa944eb6cfa290d89879 WatchSource:0}: Error finding container ca1174c4e8e36d4819cb1faa238745cf1426156b68e4aa944eb6cfa290d89879: Status 404 returned error can't find the container with id ca1174c4e8e36d4819cb1faa238745cf1426156b68e4aa944eb6cfa290d89879 Jan 23 06:41:11 crc kubenswrapper[4784]: W0123 06:41:11.140617 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d6bfb06_c97c_4f8d_8da9_ba12f6640bad.slice/crio-06e25402827ea6ae36a6711f73c33ab08647c43d730bd1eccf31ca8df73cdfe7 WatchSource:0}: Error finding container 06e25402827ea6ae36a6711f73c33ab08647c43d730bd1eccf31ca8df73cdfe7: Status 404 returned error can't find the container with id 
06e25402827ea6ae36a6711f73c33ab08647c43d730bd1eccf31ca8df73cdfe7 Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.283560 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-48fe-account-create-update-cvjrf"] Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.766081 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-r2prw" event={"ID":"b439ced3-cccc-44d7-b249-a37d3505df26","Type":"ContainerStarted","Data":"2c55c0af6b3bfcd0b16b1169892012d9b675cdebb6b29435c343d853178dc58e"} Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.772725 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bkcjh" event={"ID":"ada74437-66bf-4316-a16d-89377a5b5e41","Type":"ContainerStarted","Data":"63f381dbc2c33f81a0258f38e5517cb7cd24482284d4daad5e581ab5ec6fe265"} Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.790194 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2g628" event={"ID":"a5a0bd14-68e7-4973-ad97-42f2238300f5","Type":"ContainerStarted","Data":"62b54d53f14c655486c0f15bd2b0cb1a4ecde5aa0fb7b0515d376df1586a34e5"} Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.794105 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f5snt" event={"ID":"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad","Type":"ContainerStarted","Data":"06e25402827ea6ae36a6711f73c33ab08647c43d730bd1eccf31ca8df73cdfe7"} Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.799852 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a197-account-create-update-q6j44" event={"ID":"0b44b992-71dd-4aa8-aad0-9b323d47e8fb","Type":"ContainerStarted","Data":"ca1174c4e8e36d4819cb1faa238745cf1426156b68e4aa944eb6cfa290d89879"} Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.803833 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fsq8w" 
event={"ID":"355a352a-3ae0-4db7-9a25-3588f4233973","Type":"ContainerStarted","Data":"030cbcc1bf108c4e929ef1f92e06a2c4c0ea4fbb584be7c056e2c0c62a6e88ae"} Jan 23 06:41:11 crc kubenswrapper[4784]: I0123 06:41:11.820927 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c72e-account-create-update-flvx5"] Jan 23 06:41:12 crc kubenswrapper[4784]: W0123 06:41:12.120975 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba896b3e_c197_41d5_b182_f17f508d32b7.slice/crio-08f6538926d5511dd705e6c41b38e3aaba2f1398f0717d849609fe6deb9c7576 WatchSource:0}: Error finding container 08f6538926d5511dd705e6c41b38e3aaba2f1398f0717d849609fe6deb9c7576: Status 404 returned error can't find the container with id 08f6538926d5511dd705e6c41b38e3aaba2f1398f0717d849609fe6deb9c7576 Jan 23 06:41:12 crc kubenswrapper[4784]: W0123 06:41:12.135697 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1df9e961_9c7f_49bc_aae3_018a4850e116.slice/crio-40b7ebf7bad6a10366b6ba3f19fedc8a572abf0f4c96be015d745034c07c2713 WatchSource:0}: Error finding container 40b7ebf7bad6a10366b6ba3f19fedc8a572abf0f4c96be015d745034c07c2713: Status 404 returned error can't find the container with id 40b7ebf7bad6a10366b6ba3f19fedc8a572abf0f4c96be015d745034c07c2713 Jan 23 06:41:12 crc kubenswrapper[4784]: I0123 06:41:12.819511 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c72e-account-create-update-flvx5" event={"ID":"1df9e961-9c7f-49bc-aae3-018a4850e116","Type":"ContainerStarted","Data":"40b7ebf7bad6a10366b6ba3f19fedc8a572abf0f4c96be015d745034c07c2713"} Jan 23 06:41:12 crc kubenswrapper[4784]: I0123 06:41:12.821853 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48fe-account-create-update-cvjrf" 
event={"ID":"ba896b3e-c197-41d5-b182-f17f508d32b7","Type":"ContainerStarted","Data":"08f6538926d5511dd705e6c41b38e3aaba2f1398f0717d849609fe6deb9c7576"} Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.845901 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48fe-account-create-update-cvjrf" event={"ID":"ba896b3e-c197-41d5-b182-f17f508d32b7","Type":"ContainerStarted","Data":"2108edd466edd8637d61fa8a9ba8630f95fef7790fe21b3a047d8479558cfe3d"} Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.856551 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerStarted","Data":"2444e6e56e66e69329ca6d890998e8774bd28f660539aa049c86704a170fe184"} Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.875486 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-48fe-account-create-update-cvjrf" podStartSLOduration=4.875455039 podStartE2EDuration="4.875455039s" podCreationTimestamp="2026-01-23 06:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:41:13.866699203 +0000 UTC m=+1277.099207177" watchObservedRunningTime="2026-01-23 06:41:13.875455039 +0000 UTC m=+1277.107963013" Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.880566 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"ef733261bf1bcc6217fd62a2c8b458366db262f137419cc5a8dee80a18028b2f"} Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.881055 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"75b59d46990150e43502670ada4a1f43dae36464db19cbcfd956269ac68a24e3"} Jan 23 06:41:13 
crc kubenswrapper[4784]: I0123 06:41:13.884706 4784 generic.go:334] "Generic (PLEG): container finished" podID="a5a0bd14-68e7-4973-ad97-42f2238300f5" containerID="1de34ec5b1a10fdd82f326a8ba23bfb67713045202eb7f2b12ceb3f157decba8" exitCode=0 Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.884936 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2g628" event={"ID":"a5a0bd14-68e7-4973-ad97-42f2238300f5","Type":"ContainerDied","Data":"1de34ec5b1a10fdd82f326a8ba23bfb67713045202eb7f2b12ceb3f157decba8"} Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.889192 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c72e-account-create-update-flvx5" event={"ID":"1df9e961-9c7f-49bc-aae3-018a4850e116","Type":"ContainerStarted","Data":"c1ea2f099a7af139001266c2131d65ee81baada09f224f2a0c0353a50b36daee"} Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.893047 4784 generic.go:334] "Generic (PLEG): container finished" podID="8d6bfb06-c97c-4f8d-8da9-ba12f6640bad" containerID="1ebb00c7a08cc26d903c925f480fe8c208ade687bfc268ecd5f437818d169b45" exitCode=0 Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.893149 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f5snt" event={"ID":"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad","Type":"ContainerDied","Data":"1ebb00c7a08cc26d903c925f480fe8c208ade687bfc268ecd5f437818d169b45"} Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.898761 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a197-account-create-update-q6j44" event={"ID":"0b44b992-71dd-4aa8-aad0-9b323d47e8fb","Type":"ContainerStarted","Data":"b75b3e7bf5e6f4341b69a679701df366fecb241624bf2b744ba9a37364cc410e"} Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.902996 4784 generic.go:334] "Generic (PLEG): container finished" podID="b439ced3-cccc-44d7-b249-a37d3505df26" 
containerID="efb4745b9899f5ed787d59bf4b3116a114fa21aa0e3351f3f7155317e4d54306" exitCode=0 Jan 23 06:41:13 crc kubenswrapper[4784]: I0123 06:41:13.903064 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-r2prw" event={"ID":"b439ced3-cccc-44d7-b249-a37d3505df26","Type":"ContainerDied","Data":"efb4745b9899f5ed787d59bf4b3116a114fa21aa0e3351f3f7155317e4d54306"} Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.035767 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c72e-account-create-update-flvx5" podStartSLOduration=5.035709408 podStartE2EDuration="5.035709408s" podCreationTimestamp="2026-01-23 06:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:41:14.02397785 +0000 UTC m=+1277.256485834" watchObservedRunningTime="2026-01-23 06:41:14.035709408 +0000 UTC m=+1277.268217392" Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.927835 4784 generic.go:334] "Generic (PLEG): container finished" podID="1df9e961-9c7f-49bc-aae3-018a4850e116" containerID="c1ea2f099a7af139001266c2131d65ee81baada09f224f2a0c0353a50b36daee" exitCode=0 Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.928012 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c72e-account-create-update-flvx5" event={"ID":"1df9e961-9c7f-49bc-aae3-018a4850e116","Type":"ContainerDied","Data":"c1ea2f099a7af139001266c2131d65ee81baada09f224f2a0c0353a50b36daee"} Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.933708 4784 generic.go:334] "Generic (PLEG): container finished" podID="0b44b992-71dd-4aa8-aad0-9b323d47e8fb" containerID="b75b3e7bf5e6f4341b69a679701df366fecb241624bf2b744ba9a37364cc410e" exitCode=0 Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.934044 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a197-account-create-update-q6j44" 
event={"ID":"0b44b992-71dd-4aa8-aad0-9b323d47e8fb","Type":"ContainerDied","Data":"b75b3e7bf5e6f4341b69a679701df366fecb241624bf2b744ba9a37364cc410e"} Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.937971 4784 generic.go:334] "Generic (PLEG): container finished" podID="ba896b3e-c197-41d5-b182-f17f508d32b7" containerID="2108edd466edd8637d61fa8a9ba8630f95fef7790fe21b3a047d8479558cfe3d" exitCode=0 Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.938098 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48fe-account-create-update-cvjrf" event={"ID":"ba896b3e-c197-41d5-b182-f17f508d32b7","Type":"ContainerDied","Data":"2108edd466edd8637d61fa8a9ba8630f95fef7790fe21b3a047d8479558cfe3d"} Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.953243 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"392f1e9d84d597cde76501d711831751c23755763cf4016e4f247b830cf1a214"} Jan 23 06:41:14 crc kubenswrapper[4784]: I0123 06:41:14.953320 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"a22996074fc130d5333ce428f105b8c267e0ccfc8634430321f1df0b1db37bc0"} Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.452238 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.615585 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mx67\" (UniqueName: \"kubernetes.io/projected/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-kube-api-access-4mx67\") pod \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\" (UID: \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\") " Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.615939 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-operator-scripts\") pod \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\" (UID: \"0b44b992-71dd-4aa8-aad0-9b323d47e8fb\") " Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.617005 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b44b992-71dd-4aa8-aad0-9b323d47e8fb" (UID: "0b44b992-71dd-4aa8-aad0-9b323d47e8fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.647083 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-kube-api-access-4mx67" (OuterVolumeSpecName: "kube-api-access-4mx67") pod "0b44b992-71dd-4aa8-aad0-9b323d47e8fb" (UID: "0b44b992-71dd-4aa8-aad0-9b323d47e8fb"). InnerVolumeSpecName "kube-api-access-4mx67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.667426 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.687519 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.696850 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2g628" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.718781 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.718818 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mx67\" (UniqueName: \"kubernetes.io/projected/0b44b992-71dd-4aa8-aad0-9b323d47e8fb-kube-api-access-4mx67\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.821591 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-operator-scripts\") pod \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\" (UID: \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\") " Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.821775 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b439ced3-cccc-44d7-b249-a37d3505df26-operator-scripts\") pod \"b439ced3-cccc-44d7-b249-a37d3505df26\" (UID: \"b439ced3-cccc-44d7-b249-a37d3505df26\") " Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.821843 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkl7c\" (UniqueName: \"kubernetes.io/projected/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-kube-api-access-vkl7c\") pod 
\"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\" (UID: \"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad\") " Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.821884 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a0bd14-68e7-4973-ad97-42f2238300f5-operator-scripts\") pod \"a5a0bd14-68e7-4973-ad97-42f2238300f5\" (UID: \"a5a0bd14-68e7-4973-ad97-42f2238300f5\") " Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.821907 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpwc4\" (UniqueName: \"kubernetes.io/projected/a5a0bd14-68e7-4973-ad97-42f2238300f5-kube-api-access-rpwc4\") pod \"a5a0bd14-68e7-4973-ad97-42f2238300f5\" (UID: \"a5a0bd14-68e7-4973-ad97-42f2238300f5\") " Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.822030 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnnvw\" (UniqueName: \"kubernetes.io/projected/b439ced3-cccc-44d7-b249-a37d3505df26-kube-api-access-qnnvw\") pod \"b439ced3-cccc-44d7-b249-a37d3505df26\" (UID: \"b439ced3-cccc-44d7-b249-a37d3505df26\") " Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.822203 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d6bfb06-c97c-4f8d-8da9-ba12f6640bad" (UID: "8d6bfb06-c97c-4f8d-8da9-ba12f6640bad"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.822471 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.823217 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5a0bd14-68e7-4973-ad97-42f2238300f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5a0bd14-68e7-4973-ad97-42f2238300f5" (UID: "a5a0bd14-68e7-4973-ad97-42f2238300f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.823534 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b439ced3-cccc-44d7-b249-a37d3505df26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b439ced3-cccc-44d7-b249-a37d3505df26" (UID: "b439ced3-cccc-44d7-b249-a37d3505df26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.835232 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-kube-api-access-vkl7c" (OuterVolumeSpecName: "kube-api-access-vkl7c") pod "8d6bfb06-c97c-4f8d-8da9-ba12f6640bad" (UID: "8d6bfb06-c97c-4f8d-8da9-ba12f6640bad"). InnerVolumeSpecName "kube-api-access-vkl7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.837494 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a0bd14-68e7-4973-ad97-42f2238300f5-kube-api-access-rpwc4" (OuterVolumeSpecName: "kube-api-access-rpwc4") pod "a5a0bd14-68e7-4973-ad97-42f2238300f5" (UID: "a5a0bd14-68e7-4973-ad97-42f2238300f5"). InnerVolumeSpecName "kube-api-access-rpwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.856118 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b439ced3-cccc-44d7-b249-a37d3505df26-kube-api-access-qnnvw" (OuterVolumeSpecName: "kube-api-access-qnnvw") pod "b439ced3-cccc-44d7-b249-a37d3505df26" (UID: "b439ced3-cccc-44d7-b249-a37d3505df26"). InnerVolumeSpecName "kube-api-access-qnnvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.927354 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkl7c\" (UniqueName: \"kubernetes.io/projected/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad-kube-api-access-vkl7c\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.927415 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5a0bd14-68e7-4973-ad97-42f2238300f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.927426 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpwc4\" (UniqueName: \"kubernetes.io/projected/a5a0bd14-68e7-4973-ad97-42f2238300f5-kube-api-access-rpwc4\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.927437 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnnvw\" (UniqueName: 
\"kubernetes.io/projected/b439ced3-cccc-44d7-b249-a37d3505df26-kube-api-access-qnnvw\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.927447 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b439ced3-cccc-44d7-b249-a37d3505df26-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.971214 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f5snt" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.972643 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f5snt" event={"ID":"8d6bfb06-c97c-4f8d-8da9-ba12f6640bad","Type":"ContainerDied","Data":"06e25402827ea6ae36a6711f73c33ab08647c43d730bd1eccf31ca8df73cdfe7"} Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.976130 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e25402827ea6ae36a6711f73c33ab08647c43d730bd1eccf31ca8df73cdfe7" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.982618 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a197-account-create-update-q6j44" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.985150 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a197-account-create-update-q6j44" event={"ID":"0b44b992-71dd-4aa8-aad0-9b323d47e8fb","Type":"ContainerDied","Data":"ca1174c4e8e36d4819cb1faa238745cf1426156b68e4aa944eb6cfa290d89879"} Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.985233 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca1174c4e8e36d4819cb1faa238745cf1426156b68e4aa944eb6cfa290d89879" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.991171 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-r2prw" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.991182 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-r2prw" event={"ID":"b439ced3-cccc-44d7-b249-a37d3505df26","Type":"ContainerDied","Data":"2c55c0af6b3bfcd0b16b1169892012d9b675cdebb6b29435c343d853178dc58e"} Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.991375 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c55c0af6b3bfcd0b16b1169892012d9b675cdebb6b29435c343d853178dc58e" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.993373 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2g628" Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.994516 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2g628" event={"ID":"a5a0bd14-68e7-4973-ad97-42f2238300f5","Type":"ContainerDied","Data":"62b54d53f14c655486c0f15bd2b0cb1a4ecde5aa0fb7b0515d376df1586a34e5"} Jan 23 06:41:15 crc kubenswrapper[4784]: I0123 06:41:15.994569 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62b54d53f14c655486c0f15bd2b0cb1a4ecde5aa0fb7b0515d376df1586a34e5" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.461791 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.478416 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.546015 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnlcm\" (UniqueName: \"kubernetes.io/projected/ba896b3e-c197-41d5-b182-f17f508d32b7-kube-api-access-qnlcm\") pod \"ba896b3e-c197-41d5-b182-f17f508d32b7\" (UID: \"ba896b3e-c197-41d5-b182-f17f508d32b7\") " Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.546088 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba896b3e-c197-41d5-b182-f17f508d32b7-operator-scripts\") pod \"ba896b3e-c197-41d5-b182-f17f508d32b7\" (UID: \"ba896b3e-c197-41d5-b182-f17f508d32b7\") " Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.549508 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba896b3e-c197-41d5-b182-f17f508d32b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba896b3e-c197-41d5-b182-f17f508d32b7" (UID: "ba896b3e-c197-41d5-b182-f17f508d32b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.557385 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba896b3e-c197-41d5-b182-f17f508d32b7-kube-api-access-qnlcm" (OuterVolumeSpecName: "kube-api-access-qnlcm") pod "ba896b3e-c197-41d5-b182-f17f508d32b7" (UID: "ba896b3e-c197-41d5-b182-f17f508d32b7"). InnerVolumeSpecName "kube-api-access-qnlcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.648183 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwz5k\" (UniqueName: \"kubernetes.io/projected/1df9e961-9c7f-49bc-aae3-018a4850e116-kube-api-access-rwz5k\") pod \"1df9e961-9c7f-49bc-aae3-018a4850e116\" (UID: \"1df9e961-9c7f-49bc-aae3-018a4850e116\") " Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.648387 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df9e961-9c7f-49bc-aae3-018a4850e116-operator-scripts\") pod \"1df9e961-9c7f-49bc-aae3-018a4850e116\" (UID: \"1df9e961-9c7f-49bc-aae3-018a4850e116\") " Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.649008 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnlcm\" (UniqueName: \"kubernetes.io/projected/ba896b3e-c197-41d5-b182-f17f508d32b7-kube-api-access-qnlcm\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.649035 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba896b3e-c197-41d5-b182-f17f508d32b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.649599 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df9e961-9c7f-49bc-aae3-018a4850e116-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1df9e961-9c7f-49bc-aae3-018a4850e116" (UID: "1df9e961-9c7f-49bc-aae3-018a4850e116"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.652493 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df9e961-9c7f-49bc-aae3-018a4850e116-kube-api-access-rwz5k" (OuterVolumeSpecName: "kube-api-access-rwz5k") pod "1df9e961-9c7f-49bc-aae3-018a4850e116" (UID: "1df9e961-9c7f-49bc-aae3-018a4850e116"). InnerVolumeSpecName "kube-api-access-rwz5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.753565 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwz5k\" (UniqueName: \"kubernetes.io/projected/1df9e961-9c7f-49bc-aae3-018a4850e116-kube-api-access-rwz5k\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:16 crc kubenswrapper[4784]: I0123 06:41:16.753679 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1df9e961-9c7f-49bc-aae3-018a4850e116-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:17 crc kubenswrapper[4784]: I0123 06:41:17.004482 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c72e-account-create-update-flvx5" Jan 23 06:41:17 crc kubenswrapper[4784]: I0123 06:41:17.004936 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c72e-account-create-update-flvx5" event={"ID":"1df9e961-9c7f-49bc-aae3-018a4850e116","Type":"ContainerDied","Data":"40b7ebf7bad6a10366b6ba3f19fedc8a572abf0f4c96be015d745034c07c2713"} Jan 23 06:41:17 crc kubenswrapper[4784]: I0123 06:41:17.005018 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b7ebf7bad6a10366b6ba3f19fedc8a572abf0f4c96be015d745034c07c2713" Jan 23 06:41:17 crc kubenswrapper[4784]: I0123 06:41:17.007111 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48fe-account-create-update-cvjrf" event={"ID":"ba896b3e-c197-41d5-b182-f17f508d32b7","Type":"ContainerDied","Data":"08f6538926d5511dd705e6c41b38e3aaba2f1398f0717d849609fe6deb9c7576"} Jan 23 06:41:17 crc kubenswrapper[4784]: I0123 06:41:17.007174 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-48fe-account-create-update-cvjrf" Jan 23 06:41:17 crc kubenswrapper[4784]: I0123 06:41:17.007189 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f6538926d5511dd705e6c41b38e3aaba2f1398f0717d849609fe6deb9c7576" Jan 23 06:41:21 crc kubenswrapper[4784]: I0123 06:41:21.049903 4784 generic.go:334] "Generic (PLEG): container finished" podID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerID="2444e6e56e66e69329ca6d890998e8774bd28f660539aa049c86704a170fe184" exitCode=0 Jan 23 06:41:21 crc kubenswrapper[4784]: I0123 06:41:21.050457 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerDied","Data":"2444e6e56e66e69329ca6d890998e8774bd28f660539aa049c86704a170fe184"} Jan 23 06:41:35 crc kubenswrapper[4784]: E0123 06:41:35.678132 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Jan 23 06:41:35 crc kubenswrapper[4784]: E0123 06:41:35.679067 4784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Jan 23 06:41:35 crc kubenswrapper[4784]: E0123 06:41:35.679242 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldrjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-db-sync-bkcjh_openstack(ada74437-66bf-4316-a16d-89377a5b5e41): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:41:35 crc kubenswrapper[4784]: E0123 06:41:35.680456 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-bkcjh" podUID="ada74437-66bf-4316-a16d-89377a5b5e41" Jan 23 06:41:36 crc kubenswrapper[4784]: I0123 06:41:36.220291 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fsq8w" event={"ID":"355a352a-3ae0-4db7-9a25-3588f4233973","Type":"ContainerStarted","Data":"6d5e9fdb4563a080ef3471fe24a63735b7ccee215761c9f86b20e8a6c91f39fd"} Jan 23 06:41:36 crc kubenswrapper[4784]: I0123 06:41:36.232418 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerStarted","Data":"6a28253ac0032048200290f552c55613ace6ed8d277a52da3224b5099aebf6b7"} Jan 23 06:41:36 crc kubenswrapper[4784]: I0123 06:41:36.246194 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"bf5ef9d067215ccad3a6ecd02ad55f1a12c7b9d51b69ade4de5e967b038060d9"} Jan 23 06:41:36 crc kubenswrapper[4784]: E0123 06:41:36.247942 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-bkcjh" podUID="ada74437-66bf-4316-a16d-89377a5b5e41" Jan 23 06:41:36 crc kubenswrapper[4784]: I0123 06:41:36.272138 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-fsq8w" podStartSLOduration=2.7551765599999998 podStartE2EDuration="27.272109379s" podCreationTimestamp="2026-01-23 06:41:09 +0000 UTC" firstStartedPulling="2026-01-23 06:41:11.127220534 +0000 UTC m=+1274.359728508" lastFinishedPulling="2026-01-23 06:41:35.644153343 +0000 UTC m=+1298.876661327" observedRunningTime="2026-01-23 06:41:36.245451923 +0000 UTC m=+1299.477959897" watchObservedRunningTime="2026-01-23 06:41:36.272109379 +0000 UTC m=+1299.504617353" Jan 23 06:41:37 crc kubenswrapper[4784]: I0123 06:41:37.279647 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"ec51e7f42e5e6673b8ede3bf860551d03487b1b27949389e7909cfa7bee994be"} Jan 23 06:41:37 crc kubenswrapper[4784]: I0123 06:41:37.280643 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"207e4997099eb002b764ec8f307d362988c96255279f4750d588b64501d1389f"} Jan 23 06:41:37 crc kubenswrapper[4784]: I0123 06:41:37.280657 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"9f10a04d43d2693016b8bdd590dbc263500747048d3c99ee58274552537ff1af"} Jan 23 06:41:37 crc kubenswrapper[4784]: I0123 06:41:37.280666 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"1f346b7f7087a269f48405ecf4e8c6373cc85bf00216c23a8896d5485b3a0a71"} Jan 23 06:41:37 crc kubenswrapper[4784]: I0123 06:41:37.286029 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dqv9q" 
event={"ID":"4a92a258-aeae-45d3-ac60-f5d9033a0e5c","Type":"ContainerStarted","Data":"5106bfcf0e4ae760d500cace7f3a85f1a6c5944ec65d8337657cbed981815e01"} Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.305665 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"bacaa53632a48cd2c1765dc6bf6b9a56378f7b4c1dece18f09763720726c0b2d"} Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.306190 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"abb5c886-7378-4bdd-b56a-cc803db75cbd","Type":"ContainerStarted","Data":"561db2ff3d39c2d359a6c50663e966d7e2d041956c6a9252852c1194ec194450"} Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.380579 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.550040592 podStartE2EDuration="1m15.380545807s" podCreationTimestamp="2026-01-23 06:40:23 +0000 UTC" firstStartedPulling="2026-01-23 06:40:57.815352439 +0000 UTC m=+1261.047860413" lastFinishedPulling="2026-01-23 06:41:35.645857644 +0000 UTC m=+1298.878365628" observedRunningTime="2026-01-23 06:41:38.369198578 +0000 UTC m=+1301.601706582" watchObservedRunningTime="2026-01-23 06:41:38.380545807 +0000 UTC m=+1301.613053781" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.382647 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dqv9q" podStartSLOduration=3.534973458 podStartE2EDuration="51.382635229s" podCreationTimestamp="2026-01-23 06:40:47 +0000 UTC" firstStartedPulling="2026-01-23 06:40:48.070015565 +0000 UTC m=+1251.302523539" lastFinishedPulling="2026-01-23 06:41:35.917677336 +0000 UTC m=+1299.150185310" observedRunningTime="2026-01-23 06:41:37.312183205 +0000 UTC m=+1300.544691179" watchObservedRunningTime="2026-01-23 06:41:38.382635229 +0000 UTC m=+1301.615143203" Jan 23 
06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.815795 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9nk9j"] Jan 23 06:41:38 crc kubenswrapper[4784]: E0123 06:41:38.816387 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba896b3e-c197-41d5-b182-f17f508d32b7" containerName="mariadb-account-create-update" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816416 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba896b3e-c197-41d5-b182-f17f508d32b7" containerName="mariadb-account-create-update" Jan 23 06:41:38 crc kubenswrapper[4784]: E0123 06:41:38.816429 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b44b992-71dd-4aa8-aad0-9b323d47e8fb" containerName="mariadb-account-create-update" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816439 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b44b992-71dd-4aa8-aad0-9b323d47e8fb" containerName="mariadb-account-create-update" Jan 23 06:41:38 crc kubenswrapper[4784]: E0123 06:41:38.816455 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b439ced3-cccc-44d7-b249-a37d3505df26" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816463 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b439ced3-cccc-44d7-b249-a37d3505df26" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: E0123 06:41:38.816491 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a0bd14-68e7-4973-ad97-42f2238300f5" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816502 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a0bd14-68e7-4973-ad97-42f2238300f5" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: E0123 06:41:38.816522 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1df9e961-9c7f-49bc-aae3-018a4850e116" containerName="mariadb-account-create-update" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816531 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df9e961-9c7f-49bc-aae3-018a4850e116" containerName="mariadb-account-create-update" Jan 23 06:41:38 crc kubenswrapper[4784]: E0123 06:41:38.816558 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6bfb06-c97c-4f8d-8da9-ba12f6640bad" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816568 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6bfb06-c97c-4f8d-8da9-ba12f6640bad" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816812 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b439ced3-cccc-44d7-b249-a37d3505df26" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816848 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6bfb06-c97c-4f8d-8da9-ba12f6640bad" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816862 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df9e961-9c7f-49bc-aae3-018a4850e116" containerName="mariadb-account-create-update" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816884 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a0bd14-68e7-4973-ad97-42f2238300f5" containerName="mariadb-database-create" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816899 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b44b992-71dd-4aa8-aad0-9b323d47e8fb" containerName="mariadb-account-create-update" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.816909 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba896b3e-c197-41d5-b182-f17f508d32b7" containerName="mariadb-account-create-update" Jan 
23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.818359 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.821461 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.836498 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9nk9j"] Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.955784 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.955977 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26sls\" (UniqueName: \"kubernetes.io/projected/438793e6-8343-481b-bcc1-756ab75fafa9-kube-api-access-26sls\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.956252 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.956498 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.956536 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-config\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:38 crc kubenswrapper[4784]: I0123 06:41:38.956558 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.059198 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.059370 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.059410 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-config\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.059434 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.059510 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.059557 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26sls\" (UniqueName: \"kubernetes.io/projected/438793e6-8343-481b-bcc1-756ab75fafa9-kube-api-access-26sls\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.060658 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.060744 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.061184 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.061347 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.061685 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-config\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.087886 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26sls\" (UniqueName: \"kubernetes.io/projected/438793e6-8343-481b-bcc1-756ab75fafa9-kube-api-access-26sls\") pod \"dnsmasq-dns-764c5664d7-9nk9j\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.157958 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:39 crc kubenswrapper[4784]: I0123 06:41:39.680024 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9nk9j"] Jan 23 06:41:39 crc kubenswrapper[4784]: W0123 06:41:39.688289 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod438793e6_8343_481b_bcc1_756ab75fafa9.slice/crio-f792f3c5d2517874cb188b4f1afb1ddad9b7707679757e7165d2924b40bf83a9 WatchSource:0}: Error finding container f792f3c5d2517874cb188b4f1afb1ddad9b7707679757e7165d2924b40bf83a9: Status 404 returned error can't find the container with id f792f3c5d2517874cb188b4f1afb1ddad9b7707679757e7165d2924b40bf83a9 Jan 23 06:41:40 crc kubenswrapper[4784]: I0123 06:41:40.327864 4784 generic.go:334] "Generic (PLEG): container finished" podID="438793e6-8343-481b-bcc1-756ab75fafa9" containerID="b4632ea93910d46165ca15446c30eb6b1fe02ffe1993b6bdcfd58f3ed2265ee1" exitCode=0 Jan 23 06:41:40 crc kubenswrapper[4784]: I0123 06:41:40.327970 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" event={"ID":"438793e6-8343-481b-bcc1-756ab75fafa9","Type":"ContainerDied","Data":"b4632ea93910d46165ca15446c30eb6b1fe02ffe1993b6bdcfd58f3ed2265ee1"} Jan 23 06:41:40 crc kubenswrapper[4784]: I0123 06:41:40.328520 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" event={"ID":"438793e6-8343-481b-bcc1-756ab75fafa9","Type":"ContainerStarted","Data":"f792f3c5d2517874cb188b4f1afb1ddad9b7707679757e7165d2924b40bf83a9"} Jan 23 06:41:40 crc kubenswrapper[4784]: I0123 06:41:40.335206 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerStarted","Data":"d0692b3ffba15cc1bbd3a3e2067b83e61e9daa350591185fb3f1858b49370111"} Jan 23 06:41:40 crc 
kubenswrapper[4784]: I0123 06:41:40.335274 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerStarted","Data":"4d908ed3f9dc347b45dda7638e1e76f6fb1e971d09085a04efeab7668f9a0ddd"} Jan 23 06:41:40 crc kubenswrapper[4784]: I0123 06:41:40.402700 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=33.402665544 podStartE2EDuration="33.402665544s" podCreationTimestamp="2026-01-23 06:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:41:40.385643505 +0000 UTC m=+1303.618151479" watchObservedRunningTime="2026-01-23 06:41:40.402665544 +0000 UTC m=+1303.635173518" Jan 23 06:41:41 crc kubenswrapper[4784]: I0123 06:41:41.350046 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" event={"ID":"438793e6-8343-481b-bcc1-756ab75fafa9","Type":"ContainerStarted","Data":"1a6b90a944c741207275eaafffafefddc2f832a53480b2f5814cf94bf14617a0"} Jan 23 06:41:41 crc kubenswrapper[4784]: I0123 06:41:41.391320 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" podStartSLOduration=3.391287796 podStartE2EDuration="3.391287796s" podCreationTimestamp="2026-01-23 06:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:41:41.381123215 +0000 UTC m=+1304.613631209" watchObservedRunningTime="2026-01-23 06:41:41.391287796 +0000 UTC m=+1304.623795770" Jan 23 06:41:42 crc kubenswrapper[4784]: I0123 06:41:42.367413 4784 generic.go:334] "Generic (PLEG): container finished" podID="355a352a-3ae0-4db7-9a25-3588f4233973" containerID="6d5e9fdb4563a080ef3471fe24a63735b7ccee215761c9f86b20e8a6c91f39fd" 
exitCode=0 Jan 23 06:41:42 crc kubenswrapper[4784]: I0123 06:41:42.368676 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fsq8w" event={"ID":"355a352a-3ae0-4db7-9a25-3588f4233973","Type":"ContainerDied","Data":"6d5e9fdb4563a080ef3471fe24a63735b7ccee215761c9f86b20e8a6c91f39fd"} Jan 23 06:41:42 crc kubenswrapper[4784]: I0123 06:41:42.368850 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.279076 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.755517 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.869023 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-combined-ca-bundle\") pod \"355a352a-3ae0-4db7-9a25-3588f4233973\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.869105 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-config-data\") pod \"355a352a-3ae0-4db7-9a25-3588f4233973\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.869175 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g95zz\" (UniqueName: \"kubernetes.io/projected/355a352a-3ae0-4db7-9a25-3588f4233973-kube-api-access-g95zz\") pod \"355a352a-3ae0-4db7-9a25-3588f4233973\" (UID: \"355a352a-3ae0-4db7-9a25-3588f4233973\") " Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.895714 4784 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355a352a-3ae0-4db7-9a25-3588f4233973-kube-api-access-g95zz" (OuterVolumeSpecName: "kube-api-access-g95zz") pod "355a352a-3ae0-4db7-9a25-3588f4233973" (UID: "355a352a-3ae0-4db7-9a25-3588f4233973"). InnerVolumeSpecName "kube-api-access-g95zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.906515 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "355a352a-3ae0-4db7-9a25-3588f4233973" (UID: "355a352a-3ae0-4db7-9a25-3588f4233973"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.939256 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-config-data" (OuterVolumeSpecName: "config-data") pod "355a352a-3ae0-4db7-9a25-3588f4233973" (UID: "355a352a-3ae0-4db7-9a25-3588f4233973"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.972074 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.972117 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355a352a-3ae0-4db7-9a25-3588f4233973-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:43 crc kubenswrapper[4784]: I0123 06:41:43.972130 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g95zz\" (UniqueName: \"kubernetes.io/projected/355a352a-3ae0-4db7-9a25-3588f4233973-kube-api-access-g95zz\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.390471 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fsq8w" event={"ID":"355a352a-3ae0-4db7-9a25-3588f4233973","Type":"ContainerDied","Data":"030cbcc1bf108c4e929ef1f92e06a2c4c0ea4fbb584be7c056e2c0c62a6e88ae"} Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.390537 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="030cbcc1bf108c4e929ef1f92e06a2c4c0ea4fbb584be7c056e2c0c62a6e88ae" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.390586 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fsq8w" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.792922 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-prp2g"] Jan 23 06:41:44 crc kubenswrapper[4784]: E0123 06:41:44.794245 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355a352a-3ae0-4db7-9a25-3588f4233973" containerName="keystone-db-sync" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.794316 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="355a352a-3ae0-4db7-9a25-3588f4233973" containerName="keystone-db-sync" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.794580 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="355a352a-3ae0-4db7-9a25-3588f4233973" containerName="keystone-db-sync" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.795538 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.802811 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.802822 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.803019 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.803024 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2zq2z" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.803180 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.836735 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-prp2g"] Jan 23 06:41:44 crc 
kubenswrapper[4784]: I0123 06:41:44.875848 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9nk9j"] Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.876193 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" podUID="438793e6-8343-481b-bcc1-756ab75fafa9" containerName="dnsmasq-dns" containerID="cri-o://1a6b90a944c741207275eaafffafefddc2f832a53480b2f5814cf94bf14617a0" gracePeriod=10 Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.891081 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-scripts\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.891826 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-credential-keys\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.891912 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-fernet-keys\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.891970 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwlvt\" (UniqueName: \"kubernetes.io/projected/d39d8227-6e54-402b-9f33-fba0f70ba5e9-kube-api-access-nwlvt\") pod \"keystone-bootstrap-prp2g\" 
(UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.892017 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-config-data\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.892138 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-combined-ca-bundle\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.993981 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-credential-keys\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.994067 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-fernet-keys\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.994208 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwlvt\" (UniqueName: \"kubernetes.io/projected/d39d8227-6e54-402b-9f33-fba0f70ba5e9-kube-api-access-nwlvt\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " 
pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.994244 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-config-data\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.994304 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-combined-ca-bundle\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:44 crc kubenswrapper[4784]: I0123 06:41:44.994339 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-scripts\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.001904 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-credential-keys\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.028828 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9xn2f"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.032295 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-config-data\") pod \"keystone-bootstrap-prp2g\" (UID: 
\"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.032470 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.041470 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-combined-ca-bundle\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.041934 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-scripts\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.042506 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-fernet-keys\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.099880 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwlvt\" (UniqueName: \"kubernetes.io/projected/d39d8227-6e54-402b-9f33-fba0f70ba5e9-kube-api-access-nwlvt\") pod \"keystone-bootstrap-prp2g\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.102401 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.102503 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-config\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.102528 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.102565 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.102637 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bhrj\" (UniqueName: \"kubernetes.io/projected/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-kube-api-access-6bhrj\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.102660 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-svc\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.113459 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.146195 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9xn2f"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.205792 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bhrj\" (UniqueName: \"kubernetes.io/projected/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-kube-api-access-6bhrj\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.205860 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-svc\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.205896 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.205935 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-config\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.205951 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.205989 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.207024 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.207942 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.208225 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-config\") 
pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.208800 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-svc\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.218893 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.249665 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c4886c77c-bjbwq"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.251508 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.296450 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.296650 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-rx6m2" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.296790 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.296919 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.303736 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bhrj\" (UniqueName: \"kubernetes.io/projected/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-kube-api-access-6bhrj\") pod \"dnsmasq-dns-5959f8865f-9xn2f\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.320588 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-config-data\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.320726 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-horizon-secret-key\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.320853 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-logs\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.321252 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vppvv\" (UniqueName: \"kubernetes.io/projected/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-kube-api-access-vppvv\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.321424 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-scripts\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.349898 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4886c77c-bjbwq"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.414585 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tvpzc"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.421425 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.424898 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-scripts\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.424999 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-config-data\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.425067 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-horizon-secret-key\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.425151 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-logs\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.425223 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vppvv\" (UniqueName: \"kubernetes.io/projected/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-kube-api-access-vppvv\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 
06:41:45.426505 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-logs\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.426684 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-scripts\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.428405 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-config-data\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.435429 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-horizon-secret-key\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.435568 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-47crq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.435925 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.440078 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.450443 4784 generic.go:334] "Generic 
(PLEG): container finished" podID="438793e6-8343-481b-bcc1-756ab75fafa9" containerID="1a6b90a944c741207275eaafffafefddc2f832a53480b2f5814cf94bf14617a0" exitCode=0 Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.450734 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" event={"ID":"438793e6-8343-481b-bcc1-756ab75fafa9","Type":"ContainerDied","Data":"1a6b90a944c741207275eaafffafefddc2f832a53480b2f5814cf94bf14617a0"} Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.474895 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vppvv\" (UniqueName: \"kubernetes.io/projected/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-kube-api-access-vppvv\") pod \"horizon-7c4886c77c-bjbwq\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.503843 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tvpzc"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.531494 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvfd\" (UniqueName: \"kubernetes.io/projected/e52f206e-7230-4c60-a8c2-ad6cebabc434-kube-api-access-knvfd\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.531814 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-combined-ca-bundle\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.531957 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e52f206e-7230-4c60-a8c2-ad6cebabc434-etc-machine-id\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.532070 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-db-sync-config-data\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.532160 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-config-data\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.532311 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-scripts\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.549958 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.552124 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.561608 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.569204 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.588468 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.600242 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.649810 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650107 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-db-sync-config-data\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650182 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650206 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-config-data\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " 
pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650268 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-run-httpd\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650313 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-scripts\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650352 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmrfk\" (UniqueName: \"kubernetes.io/projected/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-kube-api-access-kmrfk\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650388 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-config-data\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650420 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-scripts\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650447 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650487 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knvfd\" (UniqueName: \"kubernetes.io/projected/e52f206e-7230-4c60-a8c2-ad6cebabc434-kube-api-access-knvfd\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650642 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-log-httpd\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650670 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-combined-ca-bundle\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650734 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e52f206e-7230-4c60-a8c2-ad6cebabc434-etc-machine-id\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.650860 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e52f206e-7230-4c60-a8c2-ad6cebabc434-etc-machine-id\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.658924 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-db-sync-config-data\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.669585 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-config-data\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.685149 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-scripts\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.703486 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-combined-ca-bundle\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.712353 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knvfd\" (UniqueName: \"kubernetes.io/projected/e52f206e-7230-4c60-a8c2-ad6cebabc434-kube-api-access-knvfd\") pod \"cinder-db-sync-tvpzc\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " 
pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.715245 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cf94j"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.729573 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.746239 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.746474 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.752686 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bslv4" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.754256 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.754324 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-run-httpd\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.754384 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-scripts\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.754412 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmrfk\" (UniqueName: \"kubernetes.io/projected/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-kube-api-access-kmrfk\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.754458 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-config-data\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.754483 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.754539 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-log-httpd\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.755331 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-log-httpd\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.756220 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-run-httpd\") pod \"ceilometer-0\" (UID: 
\"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.759832 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.760124 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-scripts\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.763106 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.774231 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.775313 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66b8497d5c-b9c75"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.777132 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.780840 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-config-data\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.801883 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmrfk\" (UniqueName: \"kubernetes.io/projected/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-kube-api-access-kmrfk\") pod \"ceilometer-0\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.827525 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cf94j"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.843982 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-g49wt"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.846164 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.851492 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.851639 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kr9c9" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.856515 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4b3eb4e-6408-461a-b330-f95ea1716c9e-horizon-secret-key\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.856572 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-config-data\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.856618 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-combined-ca-bundle\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.856668 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8v7\" (UniqueName: \"kubernetes.io/projected/e5d8e7e9-165a-4248-a591-e47f1313c8d0-kube-api-access-vm8v7\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " 
pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.856687 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-config\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.856707 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-scripts\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.856799 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b3eb4e-6408-461a-b330-f95ea1716c9e-logs\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.856829 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7rx\" (UniqueName: \"kubernetes.io/projected/d4b3eb4e-6408-461a-b330-f95ea1716c9e-kube-api-access-ms7rx\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.857441 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66b8497d5c-b9c75"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.903732 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.920999 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g49wt"] Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960499 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-combined-ca-bundle\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960600 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm8v7\" (UniqueName: \"kubernetes.io/projected/e5d8e7e9-165a-4248-a591-e47f1313c8d0-kube-api-access-vm8v7\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960642 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-config\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960666 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqmjx\" (UniqueName: \"kubernetes.io/projected/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-kube-api-access-dqmjx\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960686 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-scripts\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960764 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b3eb4e-6408-461a-b330-f95ea1716c9e-logs\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960792 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7rx\" (UniqueName: \"kubernetes.io/projected/d4b3eb4e-6408-461a-b330-f95ea1716c9e-kube-api-access-ms7rx\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960820 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-db-sync-config-data\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960866 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4b3eb4e-6408-461a-b330-f95ea1716c9e-horizon-secret-key\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960916 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-config-data\") pod 
\"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.960961 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-combined-ca-bundle\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.962289 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-scripts\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.962686 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b3eb4e-6408-461a-b330-f95ea1716c9e-logs\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.967511 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-config-data\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:45 crc kubenswrapper[4784]: I0123 06:41:45.981728 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-config\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.003203 4784 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9xn2f"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.018296 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7rx\" (UniqueName: \"kubernetes.io/projected/d4b3eb4e-6408-461a-b330-f95ea1716c9e-kube-api-access-ms7rx\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.019857 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-combined-ca-bundle\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.027579 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-pzpcf"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.033476 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.039083 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gz8cs" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.039104 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.040423 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4b3eb4e-6408-461a-b330-f95ea1716c9e-horizon-secret-key\") pod \"horizon-66b8497d5c-b9c75\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.040935 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.064616 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pzpcf"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.064659 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-combined-ca-bundle\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.064865 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqmjx\" (UniqueName: \"kubernetes.io/projected/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-kube-api-access-dqmjx\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.064939 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-db-sync-config-data\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.066379 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm8v7\" (UniqueName: \"kubernetes.io/projected/e5d8e7e9-165a-4248-a591-e47f1313c8d0-kube-api-access-vm8v7\") pod \"neutron-db-sync-cf94j\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.082426 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-db-sync-config-data\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.083593 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-combined-ca-bundle\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.102100 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-dn8t4"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.107023 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.121735 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-dn8t4"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.132158 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqmjx\" (UniqueName: \"kubernetes.io/projected/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-kube-api-access-dqmjx\") pod \"barbican-db-sync-g49wt\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.139694 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cf94j" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.169827 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-scripts\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.169898 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58b6a2a-7217-4621-8e1a-c8297e74a086-logs\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.169982 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-config\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 
06:41:46.170082 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rqw\" (UniqueName: \"kubernetes.io/projected/e68bf7fd-1be9-45bf-889e-49003e6bd028-kube-api-access-z2rqw\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.170103 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.170122 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-combined-ca-bundle\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.170218 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.170251 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 
06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.170291 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm455\" (UniqueName: \"kubernetes.io/projected/d58b6a2a-7217-4621-8e1a-c8297e74a086-kube-api-access-dm455\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.170344 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-config-data\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.170401 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.182785 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.221283 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g49wt" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272288 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272334 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272364 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm455\" (UniqueName: \"kubernetes.io/projected/d58b6a2a-7217-4621-8e1a-c8297e74a086-kube-api-access-dm455\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272410 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-config-data\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272438 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " 
pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272464 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-scripts\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272489 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58b6a2a-7217-4621-8e1a-c8297e74a086-logs\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272523 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-config\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272562 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rqw\" (UniqueName: \"kubernetes.io/projected/e68bf7fd-1be9-45bf-889e-49003e6bd028-kube-api-access-z2rqw\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272581 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.272600 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-combined-ca-bundle\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.283325 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.284849 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.289642 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-combined-ca-bundle\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.292209 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58b6a2a-7217-4621-8e1a-c8297e74a086-logs\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.293344 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.293490 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.293555 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-config\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.296739 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-scripts\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.298337 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rqw\" (UniqueName: \"kubernetes.io/projected/e68bf7fd-1be9-45bf-889e-49003e6bd028-kube-api-access-z2rqw\") pod \"dnsmasq-dns-58dd9ff6bc-dn8t4\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.298517 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-config-data\") pod \"placement-db-sync-pzpcf\" (UID: 
\"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.310365 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm455\" (UniqueName: \"kubernetes.io/projected/d58b6a2a-7217-4621-8e1a-c8297e74a086-kube-api-access-dm455\") pod \"placement-db-sync-pzpcf\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.355709 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-prp2g"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.366910 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.483385 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-swift-storage-0\") pod \"438793e6-8343-481b-bcc1-756ab75fafa9\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.483466 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-svc\") pod \"438793e6-8343-481b-bcc1-756ab75fafa9\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.483486 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-nb\") pod \"438793e6-8343-481b-bcc1-756ab75fafa9\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.483505 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-sb\") pod \"438793e6-8343-481b-bcc1-756ab75fafa9\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.483549 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26sls\" (UniqueName: \"kubernetes.io/projected/438793e6-8343-481b-bcc1-756ab75fafa9-kube-api-access-26sls\") pod \"438793e6-8343-481b-bcc1-756ab75fafa9\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.483722 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-config\") pod \"438793e6-8343-481b-bcc1-756ab75fafa9\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.505595 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" event={"ID":"438793e6-8343-481b-bcc1-756ab75fafa9","Type":"ContainerDied","Data":"f792f3c5d2517874cb188b4f1afb1ddad9b7707679757e7165d2924b40bf83a9"} Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.505697 4784 scope.go:117] "RemoveContainer" containerID="1a6b90a944c741207275eaafffafefddc2f832a53480b2f5814cf94bf14617a0" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.510454 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438793e6-8343-481b-bcc1-756ab75fafa9-kube-api-access-26sls" (OuterVolumeSpecName: "kube-api-access-26sls") pod "438793e6-8343-481b-bcc1-756ab75fafa9" (UID: "438793e6-8343-481b-bcc1-756ab75fafa9"). InnerVolumeSpecName "kube-api-access-26sls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.510891 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9nk9j" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.524795 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-prp2g" event={"ID":"d39d8227-6e54-402b-9f33-fba0f70ba5e9","Type":"ContainerStarted","Data":"efe9d36d5c4d712ea363a050e27f09af6a7b4c16b3fdc2415d205498609b4ecf"} Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.541777 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-pzpcf" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.580644 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.587625 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "438793e6-8343-481b-bcc1-756ab75fafa9" (UID: "438793e6-8343-481b-bcc1-756ab75fafa9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.592730 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-swift-storage-0\") pod \"438793e6-8343-481b-bcc1-756ab75fafa9\" (UID: \"438793e6-8343-481b-bcc1-756ab75fafa9\") " Jan 23 06:41:46 crc kubenswrapper[4784]: W0123 06:41:46.592956 4784 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/438793e6-8343-481b-bcc1-756ab75fafa9/volumes/kubernetes.io~configmap/dns-swift-storage-0 Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.592991 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "438793e6-8343-481b-bcc1-756ab75fafa9" (UID: "438793e6-8343-481b-bcc1-756ab75fafa9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.593480 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.593505 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26sls\" (UniqueName: \"kubernetes.io/projected/438793e6-8343-481b-bcc1-756ab75fafa9-kube-api-access-26sls\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.653155 4784 scope.go:117] "RemoveContainer" containerID="b4632ea93910d46165ca15446c30eb6b1fe02ffe1993b6bdcfd58f3ed2265ee1" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.654819 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "438793e6-8343-481b-bcc1-756ab75fafa9" (UID: "438793e6-8343-481b-bcc1-756ab75fafa9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.666064 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "438793e6-8343-481b-bcc1-756ab75fafa9" (UID: "438793e6-8343-481b-bcc1-756ab75fafa9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.705043 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "438793e6-8343-481b-bcc1-756ab75fafa9" (UID: "438793e6-8343-481b-bcc1-756ab75fafa9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.707406 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.707441 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.707455 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.709132 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-config" (OuterVolumeSpecName: "config") pod "438793e6-8343-481b-bcc1-756ab75fafa9" (UID: "438793e6-8343-481b-bcc1-756ab75fafa9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.726227 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c4886c77c-bjbwq"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.738555 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9xn2f"] Jan 23 06:41:46 crc kubenswrapper[4784]: W0123 06:41:46.749926 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ea4ea5a_c921_4c5e_8450_c3311c24ae27.slice/crio-7267d8a4f95728a455a638bb5d8d14feb934b279f1178f3fcfe6accefa9422a1 WatchSource:0}: Error finding container 7267d8a4f95728a455a638bb5d8d14feb934b279f1178f3fcfe6accefa9422a1: Status 404 returned error can't find the container with id 7267d8a4f95728a455a638bb5d8d14feb934b279f1178f3fcfe6accefa9422a1 Jan 23 06:41:46 crc kubenswrapper[4784]: W0123 06:41:46.751259 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8cb020_1f3e_4f06_8d8e_4f2c9c0e0df7.slice/crio-405bb0aeb71a92048599967d10bcde60ca662949e457ae946b6161cc94c29371 WatchSource:0}: Error finding container 405bb0aeb71a92048599967d10bcde60ca662949e457ae946b6161cc94c29371: Status 404 returned error can't find the container with id 405bb0aeb71a92048599967d10bcde60ca662949e457ae946b6161cc94c29371 Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.810606 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438793e6-8343-481b-bcc1-756ab75fafa9-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.884430 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9nk9j"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.959019 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-764c5664d7-9nk9j"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.979818 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:41:46 crc kubenswrapper[4784]: I0123 06:41:46.997533 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tvpzc"] Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.172879 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-g49wt"] Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.326131 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438793e6-8343-481b-bcc1-756ab75fafa9" path="/var/lib/kubelet/pods/438793e6-8343-481b-bcc1-756ab75fafa9/volumes" Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.438858 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cf94j"] Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.721864 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cf94j" event={"ID":"e5d8e7e9-165a-4248-a591-e47f1313c8d0","Type":"ContainerStarted","Data":"108a33da46786bec17e6998ce767a90b056b0dd1a4ebfec2e1047988ef9d63c4"} Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.738166 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4886c77c-bjbwq" event={"ID":"9ea4ea5a-c921-4c5e-8450-c3311c24ae27","Type":"ContainerStarted","Data":"7267d8a4f95728a455a638bb5d8d14feb934b279f1178f3fcfe6accefa9422a1"} Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.764616 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66b8497d5c-b9c75"] Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.833191 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-pzpcf"] Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.833534 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"3a78b6d3-fcc8-4cc3-a549-c0ba13460333","Type":"ContainerStarted","Data":"bf023f14f1a80b339c6f2a5bb4c08aadbcd1a8088f73402d075d5b64d54f307b"} Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.849061 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tvpzc" event={"ID":"e52f206e-7230-4c60-a8c2-ad6cebabc434","Type":"ContainerStarted","Data":"6f84ad448a3aa46e346d1ec998c6283c889b41c454bc562176fc403bca41584a"} Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.877000 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g49wt" event={"ID":"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f","Type":"ContainerStarted","Data":"190c1f011387c1f8c0912399a10f530dda5739704cdc0665b3cccfb9e469d462"} Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.878938 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-prp2g" event={"ID":"d39d8227-6e54-402b-9f33-fba0f70ba5e9","Type":"ContainerStarted","Data":"484ad734e3569d976c619f2d62e3ec503464dbed1e626752d09c8197e0a2e812"} Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.891500 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" event={"ID":"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7","Type":"ContainerStarted","Data":"405bb0aeb71a92048599967d10bcde60ca662949e457ae946b6161cc94c29371"} Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.891711 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" podUID="8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" containerName="init" containerID="cri-o://6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc" gracePeriod=10 Jan 23 06:41:47 crc kubenswrapper[4784]: I0123 06:41:47.914710 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-prp2g" podStartSLOduration=3.914687769 
podStartE2EDuration="3.914687769s" podCreationTimestamp="2026-01-23 06:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:41:47.911349687 +0000 UTC m=+1311.143857681" watchObservedRunningTime="2026-01-23 06:41:47.914687769 +0000 UTC m=+1311.147195743" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.172894 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-dn8t4"] Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.299431 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66b8497d5c-b9c75"] Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.451890 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d6578c897-tbcz5"] Jan 23 06:41:48 crc kubenswrapper[4784]: E0123 06:41:48.452556 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438793e6-8343-481b-bcc1-756ab75fafa9" containerName="dnsmasq-dns" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.452573 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="438793e6-8343-481b-bcc1-756ab75fafa9" containerName="dnsmasq-dns" Jan 23 06:41:48 crc kubenswrapper[4784]: E0123 06:41:48.452593 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438793e6-8343-481b-bcc1-756ab75fafa9" containerName="init" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.452600 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="438793e6-8343-481b-bcc1-756ab75fafa9" containerName="init" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.452894 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="438793e6-8343-481b-bcc1-756ab75fafa9" containerName="dnsmasq-dns" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.454940 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.522075 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-config-data\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.525142 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d6578c897-tbcz5"] Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.526538 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7rf\" (UniqueName: \"kubernetes.io/projected/a0f48798-7520-4adb-ac99-0752c9d76303-kube-api-access-8c7rf\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.526616 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-scripts\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.526946 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0f48798-7520-4adb-ac99-0752c9d76303-logs\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.527113 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/a0f48798-7520-4adb-ac99-0752c9d76303-horizon-secret-key\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.632253 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-config-data\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.632614 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c7rf\" (UniqueName: \"kubernetes.io/projected/a0f48798-7520-4adb-ac99-0752c9d76303-kube-api-access-8c7rf\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.632694 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-scripts\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.632818 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0f48798-7520-4adb-ac99-0752c9d76303-logs\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.632916 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a0f48798-7520-4adb-ac99-0752c9d76303-horizon-secret-key\") pod \"horizon-7d6578c897-tbcz5\" 
(UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.635709 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-scripts\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.636199 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-config-data\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.640847 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0f48798-7520-4adb-ac99-0752c9d76303-logs\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.658651 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c7rf\" (UniqueName: \"kubernetes.io/projected/a0f48798-7520-4adb-ac99-0752c9d76303-kube-api-access-8c7rf\") pod \"horizon-7d6578c897-tbcz5\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.661130 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.662457 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a0f48798-7520-4adb-ac99-0752c9d76303-horizon-secret-key\") pod \"horizon-7d6578c897-tbcz5\" (UID: 
\"a0f48798-7520-4adb-ac99-0752c9d76303\") " pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.703896 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.828415 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.838549 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-sb\") pod \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.838606 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bhrj\" (UniqueName: \"kubernetes.io/projected/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-kube-api-access-6bhrj\") pod \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.838651 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-config\") pod \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.838688 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-swift-storage-0\") pod \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.838744 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-svc\") pod \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.838843 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-nb\") pod \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\" (UID: \"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7\") " Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.856245 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-kube-api-access-6bhrj" (OuterVolumeSpecName: "kube-api-access-6bhrj") pod "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" (UID: "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7"). InnerVolumeSpecName "kube-api-access-6bhrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.899840 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" (UID: "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.899930 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" (UID: "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.901057 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-config" (OuterVolumeSpecName: "config") pod "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" (UID: "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.905373 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" (UID: "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.940815 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.940850 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.940861 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.940871 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" 
Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.940880 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bhrj\" (UniqueName: \"kubernetes.io/projected/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-kube-api-access-6bhrj\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.948286 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66b8497d5c-b9c75" event={"ID":"d4b3eb4e-6408-461a-b330-f95ea1716c9e","Type":"ContainerStarted","Data":"dc7b0fae0c78d89f3084c7a6aff0d797165522c8ec69c6f7cec4e7338e2650d7"} Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.957842 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" (UID: "8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:48 crc kubenswrapper[4784]: I0123 06:41:48.981416 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bkcjh" event={"ID":"ada74437-66bf-4316-a16d-89377a5b5e41","Type":"ContainerStarted","Data":"ac5813abde32d54186db6d3ca0af0b6805c61158c989e027279cdd483152c8e0"} Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.014746 4784 generic.go:334] "Generic (PLEG): container finished" podID="8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" containerID="6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc" exitCode=0 Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.015064 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.015579 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" event={"ID":"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7","Type":"ContainerDied","Data":"405bb0aeb71a92048599967d10bcde60ca662949e457ae946b6161cc94c29371"} Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.015649 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-9xn2f" event={"ID":"8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7","Type":"ContainerDied","Data":"6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc"} Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.015675 4784 scope.go:117] "RemoveContainer" containerID="6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc" Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.041018 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pzpcf" event={"ID":"d58b6a2a-7217-4621-8e1a-c8297e74a086","Type":"ContainerStarted","Data":"1cc607afce43a96145c1c5899b234668de438eb608eeb4eba653c90091892163"} Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.042908 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.055820 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-bkcjh" podStartSLOduration=4.348064333 podStartE2EDuration="41.055785879s" podCreationTimestamp="2026-01-23 06:41:08 +0000 UTC" firstStartedPulling="2026-01-23 06:41:11.120127639 +0000 UTC m=+1274.352635613" lastFinishedPulling="2026-01-23 06:41:47.827849185 +0000 UTC m=+1311.060357159" observedRunningTime="2026-01-23 06:41:49.038142985 +0000 UTC m=+1312.270650979" 
watchObservedRunningTime="2026-01-23 06:41:49.055785879 +0000 UTC m=+1312.288293863" Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.078157 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cf94j" event={"ID":"e5d8e7e9-165a-4248-a591-e47f1313c8d0","Type":"ContainerStarted","Data":"ed9b5a18514a804502fc6eb516d4ee9fd16d688e0d6471302220d67e35cab39f"} Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.097353 4784 generic.go:334] "Generic (PLEG): container finished" podID="4a92a258-aeae-45d3-ac60-f5d9033a0e5c" containerID="5106bfcf0e4ae760d500cace7f3a85f1a6c5944ec65d8337657cbed981815e01" exitCode=0 Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.098024 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dqv9q" event={"ID":"4a92a258-aeae-45d3-ac60-f5d9033a0e5c","Type":"ContainerDied","Data":"5106bfcf0e4ae760d500cace7f3a85f1a6c5944ec65d8337657cbed981815e01"} Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.113987 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cf94j" podStartSLOduration=4.113959778 podStartE2EDuration="4.113959778s" podCreationTimestamp="2026-01-23 06:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:41:49.108954466 +0000 UTC m=+1312.341462450" watchObservedRunningTime="2026-01-23 06:41:49.113959778 +0000 UTC m=+1312.346467752" Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.129894 4784 generic.go:334] "Generic (PLEG): container finished" podID="e68bf7fd-1be9-45bf-889e-49003e6bd028" containerID="0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926" exitCode=0 Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.130051 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" 
event={"ID":"e68bf7fd-1be9-45bf-889e-49003e6bd028","Type":"ContainerDied","Data":"0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926"} Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.130119 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" event={"ID":"e68bf7fd-1be9-45bf-889e-49003e6bd028","Type":"ContainerStarted","Data":"541a17a9e83aa12dc948b098064b6bebb0010538bce4089f29baa525d35e7cff"} Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.188067 4784 scope.go:117] "RemoveContainer" containerID="6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc" Jan 23 06:41:49 crc kubenswrapper[4784]: E0123 06:41:49.194419 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc\": container with ID starting with 6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc not found: ID does not exist" containerID="6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc" Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.194486 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc"} err="failed to get container status \"6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc\": rpc error: code = NotFound desc = could not find container \"6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc\": container with ID starting with 6637698ea0b71a50c8afc9e46d132e706e49ee180fdd230281125b7a843b8ccc not found: ID does not exist" Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.405217 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-9xn2f"] Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.405282 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5959f8865f-9xn2f"] Jan 23 06:41:49 crc kubenswrapper[4784]: E0123 06:41:49.498562 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e8cb020_1f3e_4f06_8d8e_4f2c9c0e0df7.slice\": RecentStats: unable to find data in memory cache]" Jan 23 06:41:49 crc kubenswrapper[4784]: I0123 06:41:49.667110 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d6578c897-tbcz5"] Jan 23 06:41:50 crc kubenswrapper[4784]: I0123 06:41:50.180374 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" event={"ID":"e68bf7fd-1be9-45bf-889e-49003e6bd028","Type":"ContainerStarted","Data":"cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9"} Jan 23 06:41:50 crc kubenswrapper[4784]: I0123 06:41:50.180983 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:50 crc kubenswrapper[4784]: I0123 06:41:50.201940 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6578c897-tbcz5" event={"ID":"a0f48798-7520-4adb-ac99-0752c9d76303","Type":"ContainerStarted","Data":"4ef0023a488285afc4ced227f2a51f0b4848af2915177996256957fda59e4db7"} Jan 23 06:41:50 crc kubenswrapper[4784]: I0123 06:41:50.230726 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" podStartSLOduration=5.230701701 podStartE2EDuration="5.230701701s" podCreationTimestamp="2026-01-23 06:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:41:50.211073829 +0000 UTC m=+1313.443581803" watchObservedRunningTime="2026-01-23 06:41:50.230701701 +0000 UTC m=+1313.463209675" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.022003 4784 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dqv9q" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.133073 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqtx6\" (UniqueName: \"kubernetes.io/projected/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-kube-api-access-nqtx6\") pod \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.133183 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-combined-ca-bundle\") pod \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.133237 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-db-sync-config-data\") pod \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.133480 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-config-data\") pod \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\" (UID: \"4a92a258-aeae-45d3-ac60-f5d9033a0e5c\") " Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.143001 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4a92a258-aeae-45d3-ac60-f5d9033a0e5c" (UID: "4a92a258-aeae-45d3-ac60-f5d9033a0e5c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.152336 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-kube-api-access-nqtx6" (OuterVolumeSpecName: "kube-api-access-nqtx6") pod "4a92a258-aeae-45d3-ac60-f5d9033a0e5c" (UID: "4a92a258-aeae-45d3-ac60-f5d9033a0e5c"). InnerVolumeSpecName "kube-api-access-nqtx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.183684 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a92a258-aeae-45d3-ac60-f5d9033a0e5c" (UID: "4a92a258-aeae-45d3-ac60-f5d9033a0e5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.239501 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqtx6\" (UniqueName: \"kubernetes.io/projected/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-kube-api-access-nqtx6\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.240020 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.240140 4784 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.288713 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dqv9q" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.291873 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" path="/var/lib/kubelet/pods/8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7/volumes" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.304579 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dqv9q" event={"ID":"4a92a258-aeae-45d3-ac60-f5d9033a0e5c","Type":"ContainerDied","Data":"d6eab9bcc8158d83921983fb6c600b45f459ff202ce1421e79e866eee8a12f69"} Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.306192 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6eab9bcc8158d83921983fb6c600b45f459ff202ce1421e79e866eee8a12f69" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.330330 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-config-data" (OuterVolumeSpecName: "config-data") pod "4a92a258-aeae-45d3-ac60-f5d9033a0e5c" (UID: "4a92a258-aeae-45d3-ac60-f5d9033a0e5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.345680 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a92a258-aeae-45d3-ac60-f5d9033a0e5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.763409 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-dn8t4"] Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.804566 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qmblm"] Jan 23 06:41:51 crc kubenswrapper[4784]: E0123 06:41:51.805131 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a92a258-aeae-45d3-ac60-f5d9033a0e5c" containerName="glance-db-sync" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.805151 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a92a258-aeae-45d3-ac60-f5d9033a0e5c" containerName="glance-db-sync" Jan 23 06:41:51 crc kubenswrapper[4784]: E0123 06:41:51.805183 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" containerName="init" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.805191 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" containerName="init" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.805398 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a92a258-aeae-45d3-ac60-f5d9033a0e5c" containerName="glance-db-sync" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.805456 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8cb020-1f3e-4f06-8d8e-4f2c9c0e0df7" containerName="init" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.806614 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.850138 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qmblm"] Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.883443 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.883509 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mspdp\" (UniqueName: \"kubernetes.io/projected/d7d605a9-5002-443e-b7d3-8e8cb7922d10-kube-api-access-mspdp\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.883534 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.883594 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-config\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.883616 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.883655 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.989251 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.989316 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-config\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.989366 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.989447 4784 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.989481 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mspdp\" (UniqueName: \"kubernetes.io/projected/d7d605a9-5002-443e-b7d3-8e8cb7922d10-kube-api-access-mspdp\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.989505 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.990594 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.991709 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.991916 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.993976 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:51 crc kubenswrapper[4784]: I0123 06:41:51.994201 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-config\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.019081 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mspdp\" (UniqueName: \"kubernetes.io/projected/d7d605a9-5002-443e-b7d3-8e8cb7922d10-kube-api-access-mspdp\") pod \"dnsmasq-dns-785d8bcb8c-qmblm\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.148371 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.583148 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.585362 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.589710 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zwc5d" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.590057 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.590305 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.630645 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.710034 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.710171 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-scripts\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.710202 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 
06:41:52.710235 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.710281 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-logs\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.710323 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28s4\" (UniqueName: \"kubernetes.io/projected/c41c5000-9962-4f05-af14-ded819d94650-kube-api-access-q28s4\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.710376 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-config-data\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.812190 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc 
kubenswrapper[4784]: I0123 06:41:52.812316 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-scripts\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.812345 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.812370 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.812407 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-logs\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.812444 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q28s4\" (UniqueName: \"kubernetes.io/projected/c41c5000-9962-4f05-af14-ded819d94650-kube-api-access-q28s4\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.812486 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-config-data\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.814210 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.816108 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-logs\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.821109 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.824656 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.858555 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28s4\" (UniqueName: 
\"kubernetes.io/projected/c41c5000-9962-4f05-af14-ded819d94650-kube-api-access-q28s4\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.864953 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-scripts\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:52 crc kubenswrapper[4784]: I0123 06:41:52.947334 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-config-data\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.039043 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " pod="openstack/glance-default-external-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.078652 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qmblm"] Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.091299 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.095236 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.100207 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.135969 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.231518 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.231594 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.231623 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7t4t\" (UniqueName: \"kubernetes.io/projected/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-kube-api-access-z7t4t\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.231640 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " 
pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.231663 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.231720 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.231783 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.244476 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.333422 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7t4t\" (UniqueName: \"kubernetes.io/projected/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-kube-api-access-z7t4t\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.333465 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.333490 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.333555 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.333598 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc 
kubenswrapper[4784]: I0123 06:41:53.333678 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.333712 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.335581 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.335723 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.336225 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.338381 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.341049 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-logs\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.347224 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" event={"ID":"d7d605a9-5002-443e-b7d3-8e8cb7922d10","Type":"ContainerStarted","Data":"310e959582154ca9aec094e69aea13c21b2a3839c6bfb622c3e461724c7f90db"} Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.347234 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" podUID="e68bf7fd-1be9-45bf-889e-49003e6bd028" containerName="dnsmasq-dns" containerID="cri-o://cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9" gracePeriod=10 Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.349335 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.354832 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.362999 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.373377 4784 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z7t4t\" (UniqueName: \"kubernetes.io/projected/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-kube-api-access-z7t4t\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.377408 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.414183 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:41:53 crc kubenswrapper[4784]: I0123 06:41:53.441908 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.060224 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.158146 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-svc\") pod \"e68bf7fd-1be9-45bf-889e-49003e6bd028\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.158415 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2rqw\" (UniqueName: \"kubernetes.io/projected/e68bf7fd-1be9-45bf-889e-49003e6bd028-kube-api-access-z2rqw\") pod \"e68bf7fd-1be9-45bf-889e-49003e6bd028\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.158448 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-config\") pod \"e68bf7fd-1be9-45bf-889e-49003e6bd028\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.158498 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-sb\") pod \"e68bf7fd-1be9-45bf-889e-49003e6bd028\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.158527 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-nb\") pod \"e68bf7fd-1be9-45bf-889e-49003e6bd028\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.158552 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-swift-storage-0\") pod \"e68bf7fd-1be9-45bf-889e-49003e6bd028\" (UID: \"e68bf7fd-1be9-45bf-889e-49003e6bd028\") " Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.197190 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68bf7fd-1be9-45bf-889e-49003e6bd028-kube-api-access-z2rqw" (OuterVolumeSpecName: "kube-api-access-z2rqw") pod "e68bf7fd-1be9-45bf-889e-49003e6bd028" (UID: "e68bf7fd-1be9-45bf-889e-49003e6bd028"). InnerVolumeSpecName "kube-api-access-z2rqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.253338 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e68bf7fd-1be9-45bf-889e-49003e6bd028" (UID: "e68bf7fd-1be9-45bf-889e-49003e6bd028"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.262659 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2rqw\" (UniqueName: \"kubernetes.io/projected/e68bf7fd-1be9-45bf-889e-49003e6bd028-kube-api-access-z2rqw\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.262702 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.282477 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e68bf7fd-1be9-45bf-889e-49003e6bd028" (UID: "e68bf7fd-1be9-45bf-889e-49003e6bd028"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.291164 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e68bf7fd-1be9-45bf-889e-49003e6bd028" (UID: "e68bf7fd-1be9-45bf-889e-49003e6bd028"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.299923 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e68bf7fd-1be9-45bf-889e-49003e6bd028" (UID: "e68bf7fd-1be9-45bf-889e-49003e6bd028"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.318524 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-config" (OuterVolumeSpecName: "config") pod "e68bf7fd-1be9-45bf-889e-49003e6bd028" (UID: "e68bf7fd-1be9-45bf-889e-49003e6bd028"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.360106 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" event={"ID":"d7d605a9-5002-443e-b7d3-8e8cb7922d10","Type":"ContainerStarted","Data":"40c4a05a94f51a488463fa7f5b63f64f6b3c42d44559d059ac6f7cb94411647d"} Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.367222 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.367291 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.367309 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.367322 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68bf7fd-1be9-45bf-889e-49003e6bd028-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.390139 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="d39d8227-6e54-402b-9f33-fba0f70ba5e9" containerID="484ad734e3569d976c619f2d62e3ec503464dbed1e626752d09c8197e0a2e812" exitCode=0 Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.390273 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-prp2g" event={"ID":"d39d8227-6e54-402b-9f33-fba0f70ba5e9","Type":"ContainerDied","Data":"484ad734e3569d976c619f2d62e3ec503464dbed1e626752d09c8197e0a2e812"} Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.404359 4784 generic.go:334] "Generic (PLEG): container finished" podID="e68bf7fd-1be9-45bf-889e-49003e6bd028" containerID="cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9" exitCode=0 Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.405193 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" event={"ID":"e68bf7fd-1be9-45bf-889e-49003e6bd028","Type":"ContainerDied","Data":"cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9"} Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.405226 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.405256 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-dn8t4" event={"ID":"e68bf7fd-1be9-45bf-889e-49003e6bd028","Type":"ContainerDied","Data":"541a17a9e83aa12dc948b098064b6bebb0010538bce4089f29baa525d35e7cff"} Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.405281 4784 scope.go:117] "RemoveContainer" containerID="cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.515823 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-dn8t4"] Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.523787 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-dn8t4"] Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.588560 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.620813 4784 scope.go:117] "RemoveContainer" containerID="0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.655384 4784 scope.go:117] "RemoveContainer" containerID="cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9" Jan 23 06:41:54 crc kubenswrapper[4784]: E0123 06:41:54.655945 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9\": container with ID starting with cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9 not found: ID does not exist" containerID="cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.656008 4784 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9"} err="failed to get container status \"cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9\": rpc error: code = NotFound desc = could not find container \"cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9\": container with ID starting with cdfdd36179206b216a99681d4c2e167b1fb0093b8a4fe3c3ca9aafee26e8c3c9 not found: ID does not exist" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.656054 4784 scope.go:117] "RemoveContainer" containerID="0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926" Jan 23 06:41:54 crc kubenswrapper[4784]: E0123 06:41:54.658703 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926\": container with ID starting with 0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926 not found: ID does not exist" containerID="0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926" Jan 23 06:41:54 crc kubenswrapper[4784]: I0123 06:41:54.658909 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926"} err="failed to get container status \"0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926\": rpc error: code = NotFound desc = could not find container \"0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926\": container with ID starting with 0dab6e53e78c98d09765afd9f8eda0e9e65226a12250b7eeccc478b868561926 not found: ID does not exist" Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.151264 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.278575 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="e68bf7fd-1be9-45bf-889e-49003e6bd028" path="/var/lib/kubelet/pods/e68bf7fd-1be9-45bf-889e-49003e6bd028/volumes" Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.428531 4784 generic.go:334] "Generic (PLEG): container finished" podID="ada74437-66bf-4316-a16d-89377a5b5e41" containerID="ac5813abde32d54186db6d3ca0af0b6805c61158c989e027279cdd483152c8e0" exitCode=0 Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.428611 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bkcjh" event={"ID":"ada74437-66bf-4316-a16d-89377a5b5e41","Type":"ContainerDied","Data":"ac5813abde32d54186db6d3ca0af0b6805c61158c989e027279cdd483152c8e0"} Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.445303 4784 generic.go:334] "Generic (PLEG): container finished" podID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" containerID="40c4a05a94f51a488463fa7f5b63f64f6b3c42d44559d059ac6f7cb94411647d" exitCode=0 Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.445400 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" event={"ID":"d7d605a9-5002-443e-b7d3-8e8cb7922d10","Type":"ContainerDied","Data":"40c4a05a94f51a488463fa7f5b63f64f6b3c42d44559d059ac6f7cb94411647d"} Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.445451 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" event={"ID":"d7d605a9-5002-443e-b7d3-8e8cb7922d10","Type":"ContainerStarted","Data":"3d4f4bb952328d1f23873c0cd87f9f9d3819f53848d99269277b7e75b003b372"} Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.448593 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.453220 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"4ad89b25-5d2b-4562-8d38-8df9e359e8a6","Type":"ContainerStarted","Data":"e1e039beceff8a6f17bfc8490dca73b2306506925204cf145dc8d9884bbdcdb8"} Jan 23 06:41:55 crc kubenswrapper[4784]: I0123 06:41:55.479084 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" podStartSLOduration=4.479057413 podStartE2EDuration="4.479057413s" podCreationTimestamp="2026-01-23 06:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:41:55.473186109 +0000 UTC m=+1318.705694103" watchObservedRunningTime="2026-01-23 06:41:55.479057413 +0000 UTC m=+1318.711565387" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.013723 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c4886c77c-bjbwq"] Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.081211 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79d47d6854-hfx9p"] Jan 23 06:41:56 crc kubenswrapper[4784]: E0123 06:41:56.081809 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68bf7fd-1be9-45bf-889e-49003e6bd028" containerName="init" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.081825 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68bf7fd-1be9-45bf-889e-49003e6bd028" containerName="init" Jan 23 06:41:56 crc kubenswrapper[4784]: E0123 06:41:56.081844 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68bf7fd-1be9-45bf-889e-49003e6bd028" containerName="dnsmasq-dns" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.081850 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68bf7fd-1be9-45bf-889e-49003e6bd028" containerName="dnsmasq-dns" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.082090 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68bf7fd-1be9-45bf-889e-49003e6bd028" 
containerName="dnsmasq-dns" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.083272 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.088555 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.100622 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79d47d6854-hfx9p"] Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.127841 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.156083 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d6578c897-tbcz5"] Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.179260 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7szqd\" (UniqueName: \"kubernetes.io/projected/8d31d380-7e87-4ce6-bbfe-5f3788456978-kube-api-access-7szqd\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.179987 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-scripts\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.180115 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d31d380-7e87-4ce6-bbfe-5f3788456978-logs\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " 
pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.180196 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-combined-ca-bundle\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.180308 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-config-data\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.180418 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-secret-key\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.180549 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-tls-certs\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.184069 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.216476 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65775dd4cd-wxtf2"] Jan 23 06:41:56 crc 
kubenswrapper[4784]: I0123 06:41:56.219096 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.231667 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65775dd4cd-wxtf2"] Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.285180 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d31d380-7e87-4ce6-bbfe-5f3788456978-logs\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.284303 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d31d380-7e87-4ce6-bbfe-5f3788456978-logs\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.286484 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-combined-ca-bundle\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.288243 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-config-data\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.288673 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-secret-key\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.289128 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-tls-certs\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.289528 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-config-data\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.289738 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7szqd\" (UniqueName: \"kubernetes.io/projected/8d31d380-7e87-4ce6-bbfe-5f3788456978-kube-api-access-7szqd\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.289960 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-scripts\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.291053 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-scripts\") pod \"horizon-79d47d6854-hfx9p\" (UID: 
\"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.304730 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-tls-certs\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.304845 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-secret-key\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.305506 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-combined-ca-bundle\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.312399 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7szqd\" (UniqueName: \"kubernetes.io/projected/8d31d380-7e87-4ce6-bbfe-5f3788456978-kube-api-access-7szqd\") pod \"horizon-79d47d6854-hfx9p\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.393873 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-horizon-tls-certs\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " 
pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.394329 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a1391cd-fdf4-4770-ba43-17cb0657e117-scripts\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.394780 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mns24\" (UniqueName: \"kubernetes.io/projected/9a1391cd-fdf4-4770-ba43-17cb0657e117-kube-api-access-mns24\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.397365 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-horizon-secret-key\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.397560 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a1391cd-fdf4-4770-ba43-17cb0657e117-config-data\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.397732 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a1391cd-fdf4-4770-ba43-17cb0657e117-logs\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " 
pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.397917 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-combined-ca-bundle\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.442222 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.480328 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ad89b25-5d2b-4562-8d38-8df9e359e8a6","Type":"ContainerStarted","Data":"268a484991a488f38bbe2911e7fab3907dd3a23139df982c2c6e0d0bd010433e"} Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.501475 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mns24\" (UniqueName: \"kubernetes.io/projected/9a1391cd-fdf4-4770-ba43-17cb0657e117-kube-api-access-mns24\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.501595 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-horizon-secret-key\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.503156 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a1391cd-fdf4-4770-ba43-17cb0657e117-config-data\") pod 
\"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.505272 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a1391cd-fdf4-4770-ba43-17cb0657e117-logs\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.505379 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-combined-ca-bundle\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.505493 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-horizon-tls-certs\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.505835 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a1391cd-fdf4-4770-ba43-17cb0657e117-logs\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.505299 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a1391cd-fdf4-4770-ba43-17cb0657e117-config-data\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc 
kubenswrapper[4784]: I0123 06:41:56.507634 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a1391cd-fdf4-4770-ba43-17cb0657e117-scripts\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.508456 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-horizon-secret-key\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.508555 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a1391cd-fdf4-4770-ba43-17cb0657e117-scripts\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.512215 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-horizon-tls-certs\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.515243 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a1391cd-fdf4-4770-ba43-17cb0657e117-combined-ca-bundle\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.533272 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mns24\" 
(UniqueName: \"kubernetes.io/projected/9a1391cd-fdf4-4770-ba43-17cb0657e117-kube-api-access-mns24\") pod \"horizon-65775dd4cd-wxtf2\" (UID: \"9a1391cd-fdf4-4770-ba43-17cb0657e117\") " pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:56 crc kubenswrapper[4784]: I0123 06:41:56.545737 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.125208 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.280992 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwlvt\" (UniqueName: \"kubernetes.io/projected/d39d8227-6e54-402b-9f33-fba0f70ba5e9-kube-api-access-nwlvt\") pod \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.281167 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-combined-ca-bundle\") pod \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.281281 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-fernet-keys\") pod \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.281341 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-credential-keys\") pod \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\" (UID: 
\"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.281393 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-scripts\") pod \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.281446 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-config-data\") pod \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\" (UID: \"d39d8227-6e54-402b-9f33-fba0f70ba5e9\") " Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.287184 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d39d8227-6e54-402b-9f33-fba0f70ba5e9" (UID: "d39d8227-6e54-402b-9f33-fba0f70ba5e9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.288512 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d39d8227-6e54-402b-9f33-fba0f70ba5e9" (UID: "d39d8227-6e54-402b-9f33-fba0f70ba5e9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.288984 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-scripts" (OuterVolumeSpecName: "scripts") pod "d39d8227-6e54-402b-9f33-fba0f70ba5e9" (UID: "d39d8227-6e54-402b-9f33-fba0f70ba5e9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.295122 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39d8227-6e54-402b-9f33-fba0f70ba5e9-kube-api-access-nwlvt" (OuterVolumeSpecName: "kube-api-access-nwlvt") pod "d39d8227-6e54-402b-9f33-fba0f70ba5e9" (UID: "d39d8227-6e54-402b-9f33-fba0f70ba5e9"). InnerVolumeSpecName "kube-api-access-nwlvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.320574 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-config-data" (OuterVolumeSpecName: "config-data") pod "d39d8227-6e54-402b-9f33-fba0f70ba5e9" (UID: "d39d8227-6e54-402b-9f33-fba0f70ba5e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.332985 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d39d8227-6e54-402b-9f33-fba0f70ba5e9" (UID: "d39d8227-6e54-402b-9f33-fba0f70ba5e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.385139 4784 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.385184 4784 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.385195 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.385205 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.385216 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwlvt\" (UniqueName: \"kubernetes.io/projected/d39d8227-6e54-402b-9f33-fba0f70ba5e9-kube-api-access-nwlvt\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.385230 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39d8227-6e54-402b-9f33-fba0f70ba5e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.524771 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-prp2g" event={"ID":"d39d8227-6e54-402b-9f33-fba0f70ba5e9","Type":"ContainerDied","Data":"efe9d36d5c4d712ea363a050e27f09af6a7b4c16b3fdc2415d205498609b4ecf"} Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 
06:41:59.525013 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efe9d36d5c4d712ea363a050e27f09af6a7b4c16b3fdc2415d205498609b4ecf" Jan 23 06:41:59 crc kubenswrapper[4784]: I0123 06:41:59.525103 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-prp2g" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.287351 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-prp2g"] Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.296422 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-prp2g"] Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.375103 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j4q25"] Jan 23 06:42:00 crc kubenswrapper[4784]: E0123 06:42:00.375734 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39d8227-6e54-402b-9f33-fba0f70ba5e9" containerName="keystone-bootstrap" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.375764 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39d8227-6e54-402b-9f33-fba0f70ba5e9" containerName="keystone-bootstrap" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.375972 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39d8227-6e54-402b-9f33-fba0f70ba5e9" containerName="keystone-bootstrap" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.376995 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.380433 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.380611 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2zq2z" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.380624 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.380781 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.385658 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.396353 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j4q25"] Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.514490 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-credential-keys\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.514582 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-combined-ca-bundle\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.514785 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-nm4vc\" (UniqueName: \"kubernetes.io/projected/f9cb908c-22d4-4554-b394-68e4e32793f3-kube-api-access-nm4vc\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.515093 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-fernet-keys\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.515143 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-scripts\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.515292 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-config-data\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.618662 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm4vc\" (UniqueName: \"kubernetes.io/projected/f9cb908c-22d4-4554-b394-68e4e32793f3-kube-api-access-nm4vc\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.618855 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-fernet-keys\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.618884 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-scripts\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.621433 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-config-data\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.621573 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-credential-keys\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.621696 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-combined-ca-bundle\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.627247 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-scripts\") pod \"keystone-bootstrap-j4q25\" (UID: 
\"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.627475 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-credential-keys\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.627525 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-combined-ca-bundle\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.632763 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-fernet-keys\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.647414 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm4vc\" (UniqueName: \"kubernetes.io/projected/f9cb908c-22d4-4554-b394-68e4e32793f3-kube-api-access-nm4vc\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 06:42:00.648297 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-config-data\") pod \"keystone-bootstrap-j4q25\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:00 crc kubenswrapper[4784]: I0123 
06:42:00.720709 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:01 crc kubenswrapper[4784]: I0123 06:42:01.269380 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39d8227-6e54-402b-9f33-fba0f70ba5e9" path="/var/lib/kubelet/pods/d39d8227-6e54-402b-9f33-fba0f70ba5e9/volumes" Jan 23 06:42:02 crc kubenswrapper[4784]: I0123 06:42:02.152087 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:42:02 crc kubenswrapper[4784]: I0123 06:42:02.229363 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wvmgs"] Jan 23 06:42:02 crc kubenswrapper[4784]: I0123 06:42:02.229785 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-wvmgs" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="dnsmasq-dns" containerID="cri-o://afc82229e0f8c8f306de6a444605332a7e502c7f53ba0c937c7bcd09c3ed8c63" gracePeriod=10 Jan 23 06:42:03 crc kubenswrapper[4784]: I0123 06:42:03.863588 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wvmgs" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Jan 23 06:42:04 crc kubenswrapper[4784]: I0123 06:42:04.586275 4784 generic.go:334] "Generic (PLEG): container finished" podID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerID="afc82229e0f8c8f306de6a444605332a7e502c7f53ba0c937c7bcd09c3ed8c63" exitCode=0 Jan 23 06:42:04 crc kubenswrapper[4784]: I0123 06:42:04.586333 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wvmgs" event={"ID":"cd1979d0-9c1b-4625-ba5e-20942e12e569","Type":"ContainerDied","Data":"afc82229e0f8c8f306de6a444605332a7e502c7f53ba0c937c7bcd09c3ed8c63"} Jan 23 06:42:08 
crc kubenswrapper[4784]: I0123 06:42:08.863629 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wvmgs" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Jan 23 06:42:10 crc kubenswrapper[4784]: E0123 06:42:10.327803 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 23 06:42:10 crc kubenswrapper[4784]: E0123 06:42:10.328465 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bh5bfh5c5h5bdhfh5fbhdfh5d7h657h685hddhd6h5ch55ch655h644h684h7fh56bh56dh667h9dh589hdfh599h5bfh77h65h5ddh57fh6fh567q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ms7rx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66b8497d5c-b9c75_openstack(d4b3eb4e-6408-461a-b330-f95ea1716c9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:42:10 crc kubenswrapper[4784]: E0123 06:42:10.332993 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66b8497d5c-b9c75" podUID="d4b3eb4e-6408-461a-b330-f95ea1716c9e" Jan 23 06:42:10 crc kubenswrapper[4784]: E0123 06:42:10.354198 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 23 06:42:10 crc kubenswrapper[4784]: E0123 06:42:10.354449 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h695h5f4hc5h6fh75h584h5cbh5bbh5bbhcfh56dhb6h6fh5b9h59fh65dh667h675h585h58bhd7h544h65fhcdh56h565h74h86hcbhbfh5b4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c7rf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7d6578c897-tbcz5_openstack(a0f48798-7520-4adb-ac99-0752c9d76303): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:42:10 crc kubenswrapper[4784]: E0123 
06:42:10.357028 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7d6578c897-tbcz5" podUID="a0f48798-7520-4adb-ac99-0752c9d76303" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.191926 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.326390 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-db-sync-config-data\") pod \"ada74437-66bf-4316-a16d-89377a5b5e41\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.326459 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-combined-ca-bundle\") pod \"ada74437-66bf-4316-a16d-89377a5b5e41\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.326506 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-config-data\") pod \"ada74437-66bf-4316-a16d-89377a5b5e41\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.326936 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldrjk\" (UniqueName: \"kubernetes.io/projected/ada74437-66bf-4316-a16d-89377a5b5e41-kube-api-access-ldrjk\") 
pod \"ada74437-66bf-4316-a16d-89377a5b5e41\" (UID: \"ada74437-66bf-4316-a16d-89377a5b5e41\") " Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.342154 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ada74437-66bf-4316-a16d-89377a5b5e41" (UID: "ada74437-66bf-4316-a16d-89377a5b5e41"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.344467 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada74437-66bf-4316-a16d-89377a5b5e41-kube-api-access-ldrjk" (OuterVolumeSpecName: "kube-api-access-ldrjk") pod "ada74437-66bf-4316-a16d-89377a5b5e41" (UID: "ada74437-66bf-4316-a16d-89377a5b5e41"). InnerVolumeSpecName "kube-api-access-ldrjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.362945 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ada74437-66bf-4316-a16d-89377a5b5e41" (UID: "ada74437-66bf-4316-a16d-89377a5b5e41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.391493 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-config-data" (OuterVolumeSpecName: "config-data") pod "ada74437-66bf-4316-a16d-89377a5b5e41" (UID: "ada74437-66bf-4316-a16d-89377a5b5e41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.430140 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldrjk\" (UniqueName: \"kubernetes.io/projected/ada74437-66bf-4316-a16d-89377a5b5e41-kube-api-access-ldrjk\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.430205 4784 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.430217 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.430229 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada74437-66bf-4316-a16d-89377a5b5e41-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.685311 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bkcjh" event={"ID":"ada74437-66bf-4316-a16d-89377a5b5e41","Type":"ContainerDied","Data":"63f381dbc2c33f81a0258f38e5517cb7cd24482284d4daad5e581ab5ec6fe265"} Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.685369 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f381dbc2c33f81a0258f38e5517cb7cd24482284d4daad5e581ab5ec6fe265" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.685458 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-bkcjh" Jan 23 06:42:12 crc kubenswrapper[4784]: I0123 06:42:12.688236 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c41c5000-9962-4f05-af14-ded819d94650","Type":"ContainerStarted","Data":"82d4225ddc4f3c597f05c9e299fe03f8bd03e703a98ecdcd1a0ebf4ec19f8ba8"} Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.515055 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:42:13 crc kubenswrapper[4784]: E0123 06:42:13.515675 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada74437-66bf-4316-a16d-89377a5b5e41" containerName="watcher-db-sync" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.515696 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada74437-66bf-4316-a16d-89377a5b5e41" containerName="watcher-db-sync" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.516036 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada74437-66bf-4316-a16d-89377a5b5e41" containerName="watcher-db-sync" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.519298 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.522725 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-4htjj" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.523021 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.530346 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.646932 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.649083 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.652550 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.661638 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.661688 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.661726 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efcea72-3c4e-4458-8c0c-0e08a090b037-logs\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.661781 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.661815 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94svn\" (UniqueName: \"kubernetes.io/projected/8efcea72-3c4e-4458-8c0c-0e08a090b037-kube-api-access-94svn\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.679197 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.680966 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.683891 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.697464 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.772307 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcnt\" (UniqueName: \"kubernetes.io/projected/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-kube-api-access-8fcnt\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.772417 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.772540 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94svn\" (UniqueName: \"kubernetes.io/projected/8efcea72-3c4e-4458-8c0c-0e08a090b037-kube-api-access-94svn\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.775607 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: 
I0123 06:42:13.775723 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-config-data\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.775864 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.776032 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj85m\" (UniqueName: \"kubernetes.io/projected/7f4508f4-6ead-496d-8449-fe100d604c5b-kube-api-access-rj85m\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.777853 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-config-data\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.777891 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-logs\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.777952 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.778064 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.778100 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.778135 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4508f4-6ead-496d-8449-fe100d604c5b-logs\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.778186 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efcea72-3c4e-4458-8c0c-0e08a090b037-logs\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.782303 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efcea72-3c4e-4458-8c0c-0e08a090b037-logs\") pod \"watcher-decision-engine-0\" (UID: 
\"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.782441 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.797202 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.797785 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.797821 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94svn\" (UniqueName: \"kubernetes.io/projected/8efcea72-3c4e-4458-8c0c-0e08a090b037-kube-api-access-94svn\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.800866 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.850474 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.864522 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wvmgs" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.864723 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.885703 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4508f4-6ead-496d-8449-fe100d604c5b-logs\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.885846 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fcnt\" (UniqueName: \"kubernetes.io/projected/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-kube-api-access-8fcnt\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.885962 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.886025 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-config-data\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " 
pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.886098 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.886206 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj85m\" (UniqueName: \"kubernetes.io/projected/7f4508f4-6ead-496d-8449-fe100d604c5b-kube-api-access-rj85m\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.886260 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-config-data\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.886279 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-logs\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.886324 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.889222 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-logs\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.890347 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4508f4-6ead-496d-8449-fe100d604c5b-logs\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.891529 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.898365 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-config-data\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.913360 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fcnt\" (UniqueName: \"kubernetes.io/projected/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-kube-api-access-8fcnt\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.913948 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 
06:42:13.914430 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-config-data\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.915643 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj85m\" (UniqueName: \"kubernetes.io/projected/7f4508f4-6ead-496d-8449-fe100d604c5b-kube-api-access-rj85m\") pod \"watcher-api-0\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " pod="openstack/watcher-api-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.918687 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac1f415-cec7-4110-a87a-9a725a6bf7bb-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"1ac1f415-cec7-4110-a87a-9a725a6bf7bb\") " pod="openstack/watcher-applier-0" Jan 23 06:42:13 crc kubenswrapper[4784]: I0123 06:42:13.977330 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:42:14 crc kubenswrapper[4784]: I0123 06:42:14.057319 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 06:42:18 crc kubenswrapper[4784]: I0123 06:42:18.863933 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-wvmgs" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Jan 23 06:42:20 crc kubenswrapper[4784]: E0123 06:42:20.339424 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 23 06:42:20 crc kubenswrapper[4784]: E0123 06:42:20.340227 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56fhbh5bhdbhdfhb5h567h5bh5dh7dhcfh5bh695h545h577h66dhf4hd9h86h599h56fh598h5dfh655h9h99h679h64fh5dchd9hdh76q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vppvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,S
ubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7c4886c77c-bjbwq_openstack(9ea4ea5a-c921-4c5e-8450-c3311c24ae27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:42:20 crc kubenswrapper[4784]: E0123 06:42:20.343729 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7c4886c77c-bjbwq" podUID="9ea4ea5a-c921-4c5e-8450-c3311c24ae27" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.445585 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.455111 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.460528 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-config-data\") pod \"a0f48798-7520-4adb-ac99-0752c9d76303\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.460626 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0f48798-7520-4adb-ac99-0752c9d76303-logs\") pod \"a0f48798-7520-4adb-ac99-0752c9d76303\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.460679 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a0f48798-7520-4adb-ac99-0752c9d76303-horizon-secret-key\") pod \"a0f48798-7520-4adb-ac99-0752c9d76303\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.460865 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c7rf\" (UniqueName: \"kubernetes.io/projected/a0f48798-7520-4adb-ac99-0752c9d76303-kube-api-access-8c7rf\") pod \"a0f48798-7520-4adb-ac99-0752c9d76303\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.460994 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-scripts\") pod \"a0f48798-7520-4adb-ac99-0752c9d76303\" (UID: \"a0f48798-7520-4adb-ac99-0752c9d76303\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.461220 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a0f48798-7520-4adb-ac99-0752c9d76303-logs" (OuterVolumeSpecName: "logs") pod "a0f48798-7520-4adb-ac99-0752c9d76303" (UID: "a0f48798-7520-4adb-ac99-0752c9d76303"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.461567 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-config-data" (OuterVolumeSpecName: "config-data") pod "a0f48798-7520-4adb-ac99-0752c9d76303" (UID: "a0f48798-7520-4adb-ac99-0752c9d76303"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.461890 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.461924 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0f48798-7520-4adb-ac99-0752c9d76303-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.462306 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-scripts" (OuterVolumeSpecName: "scripts") pod "a0f48798-7520-4adb-ac99-0752c9d76303" (UID: "a0f48798-7520-4adb-ac99-0752c9d76303"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.469847 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0f48798-7520-4adb-ac99-0752c9d76303-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a0f48798-7520-4adb-ac99-0752c9d76303" (UID: "a0f48798-7520-4adb-ac99-0752c9d76303"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.470565 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f48798-7520-4adb-ac99-0752c9d76303-kube-api-access-8c7rf" (OuterVolumeSpecName: "kube-api-access-8c7rf") pod "a0f48798-7520-4adb-ac99-0752c9d76303" (UID: "a0f48798-7520-4adb-ac99-0752c9d76303"). InnerVolumeSpecName "kube-api-access-8c7rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.564711 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4b3eb4e-6408-461a-b330-f95ea1716c9e-horizon-secret-key\") pod \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.564814 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms7rx\" (UniqueName: \"kubernetes.io/projected/d4b3eb4e-6408-461a-b330-f95ea1716c9e-kube-api-access-ms7rx\") pod \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.565056 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-scripts\") pod \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.565157 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-config-data\") pod \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " Jan 23 06:42:20 crc 
kubenswrapper[4784]: I0123 06:42:20.565417 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b3eb4e-6408-461a-b330-f95ea1716c9e-logs\") pod \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\" (UID: \"d4b3eb4e-6408-461a-b330-f95ea1716c9e\") " Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.566096 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c7rf\" (UniqueName: \"kubernetes.io/projected/a0f48798-7520-4adb-ac99-0752c9d76303-kube-api-access-8c7rf\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.567990 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0f48798-7520-4adb-ac99-0752c9d76303-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.568083 4784 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a0f48798-7520-4adb-ac99-0752c9d76303-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.568905 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b3eb4e-6408-461a-b330-f95ea1716c9e-logs" (OuterVolumeSpecName: "logs") pod "d4b3eb4e-6408-461a-b330-f95ea1716c9e" (UID: "d4b3eb4e-6408-461a-b330-f95ea1716c9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.569204 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-scripts" (OuterVolumeSpecName: "scripts") pod "d4b3eb4e-6408-461a-b330-f95ea1716c9e" (UID: "d4b3eb4e-6408-461a-b330-f95ea1716c9e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.569871 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-config-data" (OuterVolumeSpecName: "config-data") pod "d4b3eb4e-6408-461a-b330-f95ea1716c9e" (UID: "d4b3eb4e-6408-461a-b330-f95ea1716c9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.573267 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b3eb4e-6408-461a-b330-f95ea1716c9e-kube-api-access-ms7rx" (OuterVolumeSpecName: "kube-api-access-ms7rx") pod "d4b3eb4e-6408-461a-b330-f95ea1716c9e" (UID: "d4b3eb4e-6408-461a-b330-f95ea1716c9e"). InnerVolumeSpecName "kube-api-access-ms7rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.578805 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b3eb4e-6408-461a-b330-f95ea1716c9e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d4b3eb4e-6408-461a-b330-f95ea1716c9e" (UID: "d4b3eb4e-6408-461a-b330-f95ea1716c9e"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.675779 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms7rx\" (UniqueName: \"kubernetes.io/projected/d4b3eb4e-6408-461a-b330-f95ea1716c9e-kube-api-access-ms7rx\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.675833 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.675847 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4b3eb4e-6408-461a-b330-f95ea1716c9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.675859 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4b3eb4e-6408-461a-b330-f95ea1716c9e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.675871 4784 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4b3eb4e-6408-461a-b330-f95ea1716c9e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.839585 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b8497d5c-b9c75" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.839586 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66b8497d5c-b9c75" event={"ID":"d4b3eb4e-6408-461a-b330-f95ea1716c9e","Type":"ContainerDied","Data":"dc7b0fae0c78d89f3084c7a6aff0d797165522c8ec69c6f7cec4e7338e2650d7"} Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.844510 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d6578c897-tbcz5" Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.844935 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6578c897-tbcz5" event={"ID":"a0f48798-7520-4adb-ac99-0752c9d76303","Type":"ContainerDied","Data":"4ef0023a488285afc4ced227f2a51f0b4848af2915177996256957fda59e4db7"} Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.940560 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d6578c897-tbcz5"] Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.957525 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d6578c897-tbcz5"] Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.982459 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66b8497d5c-b9c75"] Jan 23 06:42:20 crc kubenswrapper[4784]: I0123 06:42:20.992836 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66b8497d5c-b9c75"] Jan 23 06:42:21 crc kubenswrapper[4784]: E0123 06:42:21.159841 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 23 06:42:21 crc kubenswrapper[4784]: E0123 06:42:21.160120 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqmjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-g49wt_openstack(d192b60c-bc41-4f7d-9c61-2748ad0f8a7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:42:21 crc kubenswrapper[4784]: E0123 06:42:21.161377 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-g49wt" 
podUID="d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" Jan 23 06:42:21 crc kubenswrapper[4784]: I0123 06:42:21.279885 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0f48798-7520-4adb-ac99-0752c9d76303" path="/var/lib/kubelet/pods/a0f48798-7520-4adb-ac99-0752c9d76303/volumes" Jan 23 06:42:21 crc kubenswrapper[4784]: I0123 06:42:21.280490 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b3eb4e-6408-461a-b330-f95ea1716c9e" path="/var/lib/kubelet/pods/d4b3eb4e-6408-461a-b330-f95ea1716c9e/volumes" Jan 23 06:42:21 crc kubenswrapper[4784]: E0123 06:42:21.651163 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 23 06:42:21 crc kubenswrapper[4784]: E0123 06:42:21.651735 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd6h87h54bhc8h6dh596h5d8hffh68h598h9fh699h68dh79hc9h54h658h55fh69h5bdh576h585hf6h5d5h58fhd4hfdh555h85h5c9h7h64dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kmrfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3a78b6d3-fcc8-4cc3-a549-c0ba13460333): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:42:21 crc kubenswrapper[4784]: I0123 06:42:21.657776 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:42:21 crc kubenswrapper[4784]: I0123 06:42:21.858557 4784 generic.go:334] "Generic (PLEG): container finished" podID="e5d8e7e9-165a-4248-a591-e47f1313c8d0" containerID="ed9b5a18514a804502fc6eb516d4ee9fd16d688e0d6471302220d67e35cab39f" exitCode=0 Jan 23 06:42:21 crc kubenswrapper[4784]: I0123 06:42:21.858661 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cf94j" event={"ID":"e5d8e7e9-165a-4248-a591-e47f1313c8d0","Type":"ContainerDied","Data":"ed9b5a18514a804502fc6eb516d4ee9fd16d688e0d6471302220d67e35cab39f"} Jan 23 06:42:21 crc kubenswrapper[4784]: E0123 06:42:21.861743 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-g49wt" podUID="d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" Jan 23 06:42:22 crc kubenswrapper[4784]: E0123 06:42:22.823649 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 23 06:42:22 crc kubenswrapper[4784]: E0123 06:42:22.824692 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name
:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knvfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tvpzc_openstack(e52f206e-7230-4c60-a8c2-ad6cebabc434): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:42:22 crc kubenswrapper[4784]: E0123 06:42:22.825912 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tvpzc" podUID="e52f206e-7230-4c60-a8c2-ad6cebabc434" Jan 23 06:42:22 crc kubenswrapper[4784]: I0123 06:42:22.895263 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wvmgs" event={"ID":"cd1979d0-9c1b-4625-ba5e-20942e12e569","Type":"ContainerDied","Data":"4626c85196a6f22b28d0794a32a1ee7fadfcf5286d8f2c122cca4d294a406abf"} Jan 23 06:42:22 crc kubenswrapper[4784]: I0123 06:42:22.895313 4784 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4626c85196a6f22b28d0794a32a1ee7fadfcf5286d8f2c122cca4d294a406abf" Jan 23 06:42:22 crc kubenswrapper[4784]: E0123 06:42:22.897528 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tvpzc" podUID="e52f206e-7230-4c60-a8c2-ad6cebabc434" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.118382 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.141429 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-287zb\" (UniqueName: \"kubernetes.io/projected/cd1979d0-9c1b-4625-ba5e-20942e12e569-kube-api-access-287zb\") pod \"cd1979d0-9c1b-4625-ba5e-20942e12e569\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.141505 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-config\") pod \"cd1979d0-9c1b-4625-ba5e-20942e12e569\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.141961 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-nb\") pod \"cd1979d0-9c1b-4625-ba5e-20942e12e569\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.141992 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-sb\") pod 
\"cd1979d0-9c1b-4625-ba5e-20942e12e569\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.142158 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-dns-svc\") pod \"cd1979d0-9c1b-4625-ba5e-20942e12e569\" (UID: \"cd1979d0-9c1b-4625-ba5e-20942e12e569\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.163371 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1979d0-9c1b-4625-ba5e-20942e12e569-kube-api-access-287zb" (OuterVolumeSpecName: "kube-api-access-287zb") pod "cd1979d0-9c1b-4625-ba5e-20942e12e569" (UID: "cd1979d0-9c1b-4625-ba5e-20942e12e569"). InnerVolumeSpecName "kube-api-access-287zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.217837 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd1979d0-9c1b-4625-ba5e-20942e12e569" (UID: "cd1979d0-9c1b-4625-ba5e-20942e12e569"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.241651 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.248905 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-287zb\" (UniqueName: \"kubernetes.io/projected/cd1979d0-9c1b-4625-ba5e-20942e12e569-kube-api-access-287zb\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.248962 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.279341 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd1979d0-9c1b-4625-ba5e-20942e12e569" (UID: "cd1979d0-9c1b-4625-ba5e-20942e12e569"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.324868 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-config" (OuterVolumeSpecName: "config") pod "cd1979d0-9c1b-4625-ba5e-20942e12e569" (UID: "cd1979d0-9c1b-4625-ba5e-20942e12e569"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.330702 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd1979d0-9c1b-4625-ba5e-20942e12e569" (UID: "cd1979d0-9c1b-4625-ba5e-20942e12e569"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.350460 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vppvv\" (UniqueName: \"kubernetes.io/projected/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-kube-api-access-vppvv\") pod \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.350521 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-horizon-secret-key\") pod \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.350549 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-logs\") pod \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.350693 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-scripts\") pod \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.350789 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-config-data\") pod \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\" (UID: \"9ea4ea5a-c921-4c5e-8450-c3311c24ae27\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.351423 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.351442 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.351461 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd1979d0-9c1b-4625-ba5e-20942e12e569-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.352335 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-config-data" (OuterVolumeSpecName: "config-data") pod "9ea4ea5a-c921-4c5e-8450-c3311c24ae27" (UID: "9ea4ea5a-c921-4c5e-8450-c3311c24ae27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.352924 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-logs" (OuterVolumeSpecName: "logs") pod "9ea4ea5a-c921-4c5e-8450-c3311c24ae27" (UID: "9ea4ea5a-c921-4c5e-8450-c3311c24ae27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.353259 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-scripts" (OuterVolumeSpecName: "scripts") pod "9ea4ea5a-c921-4c5e-8450-c3311c24ae27" (UID: "9ea4ea5a-c921-4c5e-8450-c3311c24ae27"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.398041 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9ea4ea5a-c921-4c5e-8450-c3311c24ae27" (UID: "9ea4ea5a-c921-4c5e-8450-c3311c24ae27"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.410008 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-kube-api-access-vppvv" (OuterVolumeSpecName: "kube-api-access-vppvv") pod "9ea4ea5a-c921-4c5e-8450-c3311c24ae27" (UID: "9ea4ea5a-c921-4c5e-8450-c3311c24ae27"). InnerVolumeSpecName "kube-api-access-vppvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.454137 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vppvv\" (UniqueName: \"kubernetes.io/projected/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-kube-api-access-vppvv\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.454458 4784 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.454561 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.454647 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.454717 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ea4ea5a-c921-4c5e-8450-c3311c24ae27-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.603240 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.603323 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.826090 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cf94j" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.883995 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm8v7\" (UniqueName: \"kubernetes.io/projected/e5d8e7e9-165a-4248-a591-e47f1313c8d0-kube-api-access-vm8v7\") pod \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.884102 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-config\") pod \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.884378 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-combined-ca-bundle\") pod \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\" (UID: \"e5d8e7e9-165a-4248-a591-e47f1313c8d0\") " Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.893244 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d8e7e9-165a-4248-a591-e47f1313c8d0-kube-api-access-vm8v7" (OuterVolumeSpecName: "kube-api-access-vm8v7") pod "e5d8e7e9-165a-4248-a591-e47f1313c8d0" (UID: "e5d8e7e9-165a-4248-a591-e47f1313c8d0"). InnerVolumeSpecName "kube-api-access-vm8v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.918293 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerName="glance-log" containerID="cri-o://268a484991a488f38bbe2911e7fab3907dd3a23139df982c2c6e0d0bd010433e" gracePeriod=30 Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.918666 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ad89b25-5d2b-4562-8d38-8df9e359e8a6","Type":"ContainerStarted","Data":"030a649e42f4db95eb640ca25ae7a6f41bf7d73bfdbfbafee2db161946ba964a"} Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.919213 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerName="glance-httpd" containerID="cri-o://030a649e42f4db95eb640ca25ae7a6f41bf7d73bfdbfbafee2db161946ba964a" gracePeriod=30 Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.926957 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-config" (OuterVolumeSpecName: "config") pod "e5d8e7e9-165a-4248-a591-e47f1313c8d0" (UID: "e5d8e7e9-165a-4248-a591-e47f1313c8d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.953151 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pzpcf" event={"ID":"d58b6a2a-7217-4621-8e1a-c8297e74a086","Type":"ContainerStarted","Data":"e68ae9ac235abbb2055e0aa5afb1be12d5913a81f097b80e4aeedf307562d8f8"} Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.954454 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=32.954420637 podStartE2EDuration="32.954420637s" podCreationTimestamp="2026-01-23 06:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:23.952981292 +0000 UTC m=+1347.185489266" watchObservedRunningTime="2026-01-23 06:42:23.954420637 +0000 UTC m=+1347.186928611" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.957802 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5d8e7e9-165a-4248-a591-e47f1313c8d0" (UID: "e5d8e7e9-165a-4248-a591-e47f1313c8d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.957995 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cf94j" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.958030 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cf94j" event={"ID":"e5d8e7e9-165a-4248-a591-e47f1313c8d0","Type":"ContainerDied","Data":"108a33da46786bec17e6998ce767a90b056b0dd1a4ebfec2e1047988ef9d63c4"} Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.958092 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="108a33da46786bec17e6998ce767a90b056b0dd1a4ebfec2e1047988ef9d63c4" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.959863 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wvmgs" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.960843 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c4886c77c-bjbwq" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.960843 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c4886c77c-bjbwq" event={"ID":"9ea4ea5a-c921-4c5e-8450-c3311c24ae27","Type":"ContainerDied","Data":"7267d8a4f95728a455a638bb5d8d14feb934b279f1178f3fcfe6accefa9422a1"} Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.988869 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.988926 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm8v7\" (UniqueName: \"kubernetes.io/projected/e5d8e7e9-165a-4248-a591-e47f1313c8d0-kube-api-access-vm8v7\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.988940 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/e5d8e7e9-165a-4248-a591-e47f1313c8d0-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:23 crc kubenswrapper[4784]: I0123 06:42:23.991064 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-pzpcf" podStartSLOduration=3.989536211 podStartE2EDuration="38.991039407s" podCreationTimestamp="2026-01-23 06:41:45 +0000 UTC" firstStartedPulling="2026-01-23 06:41:47.827559828 +0000 UTC m=+1311.060067802" lastFinishedPulling="2026-01-23 06:42:22.829063014 +0000 UTC m=+1346.061570998" observedRunningTime="2026-01-23 06:42:23.981483702 +0000 UTC m=+1347.213991676" watchObservedRunningTime="2026-01-23 06:42:23.991039407 +0000 UTC m=+1347.223547381" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.119927 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fpd8v"] Jan 23 06:42:24 crc kubenswrapper[4784]: E0123 06:42:24.120392 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="dnsmasq-dns" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.120417 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="dnsmasq-dns" Jan 23 06:42:24 crc kubenswrapper[4784]: E0123 06:42:24.120445 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="init" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.120453 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="init" Jan 23 06:42:24 crc kubenswrapper[4784]: E0123 06:42:24.120478 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d8e7e9-165a-4248-a591-e47f1313c8d0" containerName="neutron-db-sync" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.120485 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e5d8e7e9-165a-4248-a591-e47f1313c8d0" containerName="neutron-db-sync" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.120905 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" containerName="dnsmasq-dns" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.120929 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d8e7e9-165a-4248-a591-e47f1313c8d0" containerName="neutron-db-sync" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.122129 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.158180 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fpd8v"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.197768 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.197839 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.197881 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: 
\"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.198616 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.198744 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-config\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.198877 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v8c6\" (UniqueName: \"kubernetes.io/projected/239c12d0-5821-4bcc-9b6e-b90a896731cd-kube-api-access-6v8c6\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.199905 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c4745df56-9q499"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.220649 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.225002 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.225223 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bslv4" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.225436 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.227283 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.251455 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c4745df56-9q499"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.274941 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wvmgs"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.289626 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wvmgs"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.300726 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-httpd-config\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.300820 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-config\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 
06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.300873 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcgp6\" (UniqueName: \"kubernetes.io/projected/0704d33b-825f-40fb-8c88-5fbb26b6994e-kube-api-access-dcgp6\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.300919 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.301427 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-config\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.301571 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-ovndb-tls-certs\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.301646 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v8c6\" (UniqueName: \"kubernetes.io/projected/239c12d0-5821-4bcc-9b6e-b90a896731cd-kube-api-access-6v8c6\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc 
kubenswrapper[4784]: I0123 06:42:24.301780 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-combined-ca-bundle\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.301826 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.301877 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.301947 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.302648 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-config\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.302811 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.303485 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.303640 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.303802 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.318186 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c4886c77c-bjbwq"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.331193 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v8c6\" (UniqueName: \"kubernetes.io/projected/239c12d0-5821-4bcc-9b6e-b90a896731cd-kube-api-access-6v8c6\") pod \"dnsmasq-dns-55f844cf75-fpd8v\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " 
pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.340519 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c4886c77c-bjbwq"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.405356 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-httpd-config\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.405441 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-config\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.405491 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcgp6\" (UniqueName: \"kubernetes.io/projected/0704d33b-825f-40fb-8c88-5fbb26b6994e-kube-api-access-dcgp6\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.405560 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-ovndb-tls-certs\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.405605 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-combined-ca-bundle\") pod 
\"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.411493 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-httpd-config\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.413893 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-config\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.416051 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-ovndb-tls-certs\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.423848 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcgp6\" (UniqueName: \"kubernetes.io/projected/0704d33b-825f-40fb-8c88-5fbb26b6994e-kube-api-access-dcgp6\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.429299 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-combined-ca-bundle\") pod \"neutron-7c4745df56-9q499\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc 
kubenswrapper[4784]: I0123 06:42:24.481326 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.523914 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79d47d6854-hfx9p"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.542528 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.551251 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.560555 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.570982 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.596621 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65775dd4cd-wxtf2"] Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.616326 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j4q25"] Jan 23 06:42:24 crc kubenswrapper[4784]: W0123 06:42:24.958781 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d31d380_7e87_4ce6_bbfe_5f3788456978.slice/crio-f23f4afd6f8b4d148b28d5d4cad8bc872ad030747cbba30816101fd6d3133005 WatchSource:0}: Error finding container f23f4afd6f8b4d148b28d5d4cad8bc872ad030747cbba30816101fd6d3133005: Status 404 returned error can't find the container with id f23f4afd6f8b4d148b28d5d4cad8bc872ad030747cbba30816101fd6d3133005 Jan 23 06:42:24 crc kubenswrapper[4784]: W0123 06:42:24.961376 4784 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ac1f415_cec7_4110_a87a_9a725a6bf7bb.slice/crio-bd7fedd8164fc697941b2379cead07c72e9e430a1b0d026c29e0368a1b78d3ad WatchSource:0}: Error finding container bd7fedd8164fc697941b2379cead07c72e9e430a1b0d026c29e0368a1b78d3ad: Status 404 returned error can't find the container with id bd7fedd8164fc697941b2379cead07c72e9e430a1b0d026c29e0368a1b78d3ad Jan 23 06:42:24 crc kubenswrapper[4784]: W0123 06:42:24.972236 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8efcea72_3c4e_4458_8c0c_0e08a090b037.slice/crio-8ccbddf59c8d8e74b04ada29869e4b1d0ea87cd27750cb3fde999c02d7fbc59e WatchSource:0}: Error finding container 8ccbddf59c8d8e74b04ada29869e4b1d0ea87cd27750cb3fde999c02d7fbc59e: Status 404 returned error can't find the container with id 8ccbddf59c8d8e74b04ada29869e4b1d0ea87cd27750cb3fde999c02d7fbc59e Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.997305 4784 generic.go:334] "Generic (PLEG): container finished" podID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerID="030a649e42f4db95eb640ca25ae7a6f41bf7d73bfdbfbafee2db161946ba964a" exitCode=143 Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.997351 4784 generic.go:334] "Generic (PLEG): container finished" podID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerID="268a484991a488f38bbe2911e7fab3907dd3a23139df982c2c6e0d0bd010433e" exitCode=143 Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.997428 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ad89b25-5d2b-4562-8d38-8df9e359e8a6","Type":"ContainerDied","Data":"030a649e42f4db95eb640ca25ae7a6f41bf7d73bfdbfbafee2db161946ba964a"} Jan 23 06:42:24 crc kubenswrapper[4784]: I0123 06:42:24.997470 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"4ad89b25-5d2b-4562-8d38-8df9e359e8a6","Type":"ContainerDied","Data":"268a484991a488f38bbe2911e7fab3907dd3a23139df982c2c6e0d0bd010433e"} Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.002992 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65775dd4cd-wxtf2" event={"ID":"9a1391cd-fdf4-4770-ba43-17cb0657e117","Type":"ContainerStarted","Data":"32b6a98c2c40ccb2f94e4f98ac2e679bf4bda237624e2a10f0dbcbbc6fd48aa7"} Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.008124 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7f4508f4-6ead-496d-8449-fe100d604c5b","Type":"ContainerStarted","Data":"283b7455fa8e9a7c501aefbbd50131af6e2c0035e63ea067a8143f765d3308f2"} Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.011475 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79d47d6854-hfx9p" event={"ID":"8d31d380-7e87-4ce6-bbfe-5f3788456978","Type":"ContainerStarted","Data":"f23f4afd6f8b4d148b28d5d4cad8bc872ad030747cbba30816101fd6d3133005"} Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.014112 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"1ac1f415-cec7-4110-a87a-9a725a6bf7bb","Type":"ContainerStarted","Data":"bd7fedd8164fc697941b2379cead07c72e9e430a1b0d026c29e0368a1b78d3ad"} Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.020389 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c41c5000-9962-4f05-af14-ded819d94650","Type":"ContainerStarted","Data":"ccbb4f9d09d24a456c734a4b22f3bd59cf078a7c1ac4b71d42e6f47690d77c1f"} Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.280424 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea4ea5a-c921-4c5e-8450-c3311c24ae27" path="/var/lib/kubelet/pods/9ea4ea5a-c921-4c5e-8450-c3311c24ae27/volumes" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 
06:42:25.281384 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1979d0-9c1b-4625-ba5e-20942e12e569" path="/var/lib/kubelet/pods/cd1979d0-9c1b-4625-ba5e-20942e12e569/volumes" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.573878 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.649502 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-httpd-run\") pod \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.649601 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-combined-ca-bundle\") pod \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.649786 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7t4t\" (UniqueName: \"kubernetes.io/projected/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-kube-api-access-z7t4t\") pod \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.649839 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.649890 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-scripts\") pod \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.649931 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-logs\") pod \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.650064 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-config-data\") pod \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\" (UID: \"4ad89b25-5d2b-4562-8d38-8df9e359e8a6\") " Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.650374 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4ad89b25-5d2b-4562-8d38-8df9e359e8a6" (UID: "4ad89b25-5d2b-4562-8d38-8df9e359e8a6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.651220 4784 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.651518 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-logs" (OuterVolumeSpecName: "logs") pod "4ad89b25-5d2b-4562-8d38-8df9e359e8a6" (UID: "4ad89b25-5d2b-4562-8d38-8df9e359e8a6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.655353 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-kube-api-access-z7t4t" (OuterVolumeSpecName: "kube-api-access-z7t4t") pod "4ad89b25-5d2b-4562-8d38-8df9e359e8a6" (UID: "4ad89b25-5d2b-4562-8d38-8df9e359e8a6"). InnerVolumeSpecName "kube-api-access-z7t4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.661897 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-scripts" (OuterVolumeSpecName: "scripts") pod "4ad89b25-5d2b-4562-8d38-8df9e359e8a6" (UID: "4ad89b25-5d2b-4562-8d38-8df9e359e8a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.661934 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "4ad89b25-5d2b-4562-8d38-8df9e359e8a6" (UID: "4ad89b25-5d2b-4562-8d38-8df9e359e8a6"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.694206 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ad89b25-5d2b-4562-8d38-8df9e359e8a6" (UID: "4ad89b25-5d2b-4562-8d38-8df9e359e8a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.754500 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7t4t\" (UniqueName: \"kubernetes.io/projected/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-kube-api-access-z7t4t\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.754556 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.754567 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.754577 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.754587 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.774552 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fpd8v"] Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.780882 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-config-data" (OuterVolumeSpecName: "config-data") pod "4ad89b25-5d2b-4562-8d38-8df9e359e8a6" (UID: "4ad89b25-5d2b-4562-8d38-8df9e359e8a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.805043 4784 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 23 06:42:25 crc kubenswrapper[4784]: W0123 06:42:25.827638 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod239c12d0_5821_4bcc_9b6e_b90a896731cd.slice/crio-9e01e1e01cf491676ef6855c4589a650f5fddb67f5aaa8ce85a70d83a9af6c22 WatchSource:0}: Error finding container 9e01e1e01cf491676ef6855c4589a650f5fddb67f5aaa8ce85a70d83a9af6c22: Status 404 returned error can't find the container with id 9e01e1e01cf491676ef6855c4589a650f5fddb67f5aaa8ce85a70d83a9af6c22 Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.862821 4784 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.862862 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad89b25-5d2b-4562-8d38-8df9e359e8a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:25 crc kubenswrapper[4784]: W0123 06:42:25.874880 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0704d33b_825f_40fb_8c88_5fbb26b6994e.slice/crio-dcf10b67794268be8e8f554ab6892003466d4ad466299d10bcb8d89252b39eba WatchSource:0}: Error finding container dcf10b67794268be8e8f554ab6892003466d4ad466299d10bcb8d89252b39eba: Status 404 returned error can't find the container with id dcf10b67794268be8e8f554ab6892003466d4ad466299d10bcb8d89252b39eba Jan 23 06:42:25 crc kubenswrapper[4784]: I0123 06:42:25.897340 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-7c4745df56-9q499"] Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.050968 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4745df56-9q499" event={"ID":"0704d33b-825f-40fb-8c88-5fbb26b6994e","Type":"ContainerStarted","Data":"dcf10b67794268be8e8f554ab6892003466d4ad466299d10bcb8d89252b39eba"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.058663 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a78b6d3-fcc8-4cc3-a549-c0ba13460333","Type":"ContainerStarted","Data":"115347ccb5cd70bbf37dcefd06282d0840c29164d97d83ffad76d50c522ddca9"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.103953 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4ad89b25-5d2b-4562-8d38-8df9e359e8a6","Type":"ContainerDied","Data":"e1e039beceff8a6f17bfc8490dca73b2306506925204cf145dc8d9884bbdcdb8"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.104322 4784 scope.go:117] "RemoveContainer" containerID="030a649e42f4db95eb640ca25ae7a6f41bf7d73bfdbfbafee2db161946ba964a" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.104130 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.122802 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j4q25" event={"ID":"f9cb908c-22d4-4554-b394-68e4e32793f3","Type":"ContainerStarted","Data":"d50e7c0e88ae98b6a02097aa02bd8ed3b1d22b945cc906f3d3700e2aec4afc9f"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.122857 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j4q25" event={"ID":"f9cb908c-22d4-4554-b394-68e4e32793f3","Type":"ContainerStarted","Data":"b6c348d011c0c9272f6429b70d45029401eb6d0c48c2e001edca15cc506f900f"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.147171 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7f4508f4-6ead-496d-8449-fe100d604c5b","Type":"ContainerStarted","Data":"d293815799e9b399b38036a0f17bcc01c4febc648b0dba368a341772d3552ad4"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.147234 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7f4508f4-6ead-496d-8449-fe100d604c5b","Type":"ContainerStarted","Data":"ff3f52ccf574c9ce433073d22a3669808e3eae1d89486bd36e4c8f65e174add6"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.149285 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.164983 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8efcea72-3c4e-4458-8c0c-0e08a090b037","Type":"ContainerStarted","Data":"8ccbddf59c8d8e74b04ada29869e4b1d0ea87cd27750cb3fde999c02d7fbc59e"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.175533 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" 
event={"ID":"239c12d0-5821-4bcc-9b6e-b90a896731cd","Type":"ContainerStarted","Data":"9e01e1e01cf491676ef6855c4589a650f5fddb67f5aaa8ce85a70d83a9af6c22"} Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.191877 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j4q25" podStartSLOduration=26.191841496 podStartE2EDuration="26.191841496s" podCreationTimestamp="2026-01-23 06:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:26.156430606 +0000 UTC m=+1349.388938580" watchObservedRunningTime="2026-01-23 06:42:26.191841496 +0000 UTC m=+1349.424349470" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.229369 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.253226 4784 scope.go:117] "RemoveContainer" containerID="268a484991a488f38bbe2911e7fab3907dd3a23139df982c2c6e0d0bd010433e" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.259105 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.275716 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=13.275688657 podStartE2EDuration="13.275688657s" podCreationTimestamp="2026-01-23 06:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:26.217404654 +0000 UTC m=+1349.449912628" watchObservedRunningTime="2026-01-23 06:42:26.275688657 +0000 UTC m=+1349.508196631" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.275816 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:42:26 crc kubenswrapper[4784]: E0123 
06:42:26.276384 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerName="glance-log" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.276413 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerName="glance-log" Jan 23 06:42:26 crc kubenswrapper[4784]: E0123 06:42:26.276425 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerName="glance-httpd" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.276436 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerName="glance-httpd" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.276898 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerName="glance-log" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.276955 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" containerName="glance-httpd" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.278694 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.282788 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.283015 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.314584 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.476022 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.476342 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.476522 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.476632 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.476735 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24l8q\" (UniqueName: \"kubernetes.io/projected/3565a005-cf5e-43c0-ab31-59071dc6fb9c-kube-api-access-24l8q\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.476887 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.477300 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.477428 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.580812 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.580929 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.580983 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.581094 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.581146 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24l8q\" (UniqueName: \"kubernetes.io/projected/3565a005-cf5e-43c0-ab31-59071dc6fb9c-kube-api-access-24l8q\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.581251 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.581290 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.581337 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.581512 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.581676 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.582217 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.603852 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.604765 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.603607 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.605067 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.612148 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24l8q\" (UniqueName: \"kubernetes.io/projected/3565a005-cf5e-43c0-ab31-59071dc6fb9c-kube-api-access-24l8q\") pod \"glance-default-internal-api-0\" (UID: 
\"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.650545 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.915364 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-866b9d495-7tw9h"] Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.918170 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.923262 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.932343 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.934403 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-866b9d495-7tw9h"] Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.946441 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.992193 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-combined-ca-bundle\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.992257 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-config\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.992287 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf5rc\" (UniqueName: \"kubernetes.io/projected/cf03953b-09e0-4872-ba7a-cacf7673f1af-kube-api-access-pf5rc\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.992331 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-internal-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.992359 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-ovndb-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: 
\"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.992396 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-httpd-config\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:26 crc kubenswrapper[4784]: I0123 06:42:26.992502 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-public-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.094872 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-public-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.094990 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-combined-ca-bundle\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.095041 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-config\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" 
Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.095067 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf5rc\" (UniqueName: \"kubernetes.io/projected/cf03953b-09e0-4872-ba7a-cacf7673f1af-kube-api-access-pf5rc\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.095127 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-internal-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.095168 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-ovndb-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.095202 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-httpd-config\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.108105 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-ovndb-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.117633 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-combined-ca-bundle\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.121230 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-httpd-config\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.122023 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-internal-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.125672 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-public-tls-certs\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.128530 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf5rc\" (UniqueName: \"kubernetes.io/projected/cf03953b-09e0-4872-ba7a-cacf7673f1af-kube-api-access-pf5rc\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.142686 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-config\") pod \"neutron-866b9d495-7tw9h\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.237231 4784 generic.go:334] "Generic (PLEG): container finished" podID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerID="94a27e3c53af418d70ce1201dea2bf867300c066d56535e95d593c77b04e5d46" exitCode=0 Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.237330 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" event={"ID":"239c12d0-5821-4bcc-9b6e-b90a896731cd","Type":"ContainerDied","Data":"94a27e3c53af418d70ce1201dea2bf867300c066d56535e95d593c77b04e5d46"} Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.251430 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4745df56-9q499" event={"ID":"0704d33b-825f-40fb-8c88-5fbb26b6994e","Type":"ContainerStarted","Data":"fa0ccc3355232bbb89fd52681782ad95c16df66ce4cb713b92a2303a88844c67"} Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.251525 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4745df56-9q499" event={"ID":"0704d33b-825f-40fb-8c88-5fbb26b6994e","Type":"ContainerStarted","Data":"2d0d1b7154e7737507815fbb0e58728c0238fd1976942a0f94a2fa64801d429b"} Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.251919 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.275734 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad89b25-5d2b-4562-8d38-8df9e359e8a6" path="/var/lib/kubelet/pods/4ad89b25-5d2b-4562-8d38-8df9e359e8a6/volumes" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.276862 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79d47d6854-hfx9p" 
event={"ID":"8d31d380-7e87-4ce6-bbfe-5f3788456978","Type":"ContainerStarted","Data":"8ceae607bf3d1a305e21df79b6d78c685530e9c5947012ef6b094625790484a4"} Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.276898 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79d47d6854-hfx9p" event={"ID":"8d31d380-7e87-4ce6-bbfe-5f3788456978","Type":"ContainerStarted","Data":"0b025da38950e35051ff144502203a873d0391f48eac9ab72a2003adfd788b87"} Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.294465 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c4745df56-9q499" podStartSLOduration=3.294441919 podStartE2EDuration="3.294441919s" podCreationTimestamp="2026-01-23 06:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:27.293901876 +0000 UTC m=+1350.526409860" watchObservedRunningTime="2026-01-23 06:42:27.294441919 +0000 UTC m=+1350.526949893" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.294614 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c41c5000-9962-4f05-af14-ded819d94650","Type":"ContainerStarted","Data":"de732509286404e455b0e5645795fbee959e57108fb45ef1161a8f0bee3a642a"} Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.294955 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c41c5000-9962-4f05-af14-ded819d94650" containerName="glance-httpd" containerID="cri-o://de732509286404e455b0e5645795fbee959e57108fb45ef1161a8f0bee3a642a" gracePeriod=30 Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.295080 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c41c5000-9962-4f05-af14-ded819d94650" containerName="glance-log" 
containerID="cri-o://ccbb4f9d09d24a456c734a4b22f3bd59cf078a7c1ac4b71d42e6f47690d77c1f" gracePeriod=30 Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.307438 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.341977 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65775dd4cd-wxtf2" event={"ID":"9a1391cd-fdf4-4770-ba43-17cb0657e117","Type":"ContainerStarted","Data":"833414c22542fc37f7d28a4634bf85d02afd8bedb4b8e5edb0234b02d05be9bd"} Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.342499 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65775dd4cd-wxtf2" event={"ID":"9a1391cd-fdf4-4770-ba43-17cb0657e117","Type":"ContainerStarted","Data":"19fae71723d41179ca450333e509a72ff36c37cd73e0508c048ad5f1868ef5d0"} Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.347126 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-79d47d6854-hfx9p" podStartSLOduration=30.638897745 podStartE2EDuration="31.347067963s" podCreationTimestamp="2026-01-23 06:41:56 +0000 UTC" firstStartedPulling="2026-01-23 06:42:24.969889018 +0000 UTC m=+1348.202396992" lastFinishedPulling="2026-01-23 06:42:25.678059236 +0000 UTC m=+1348.910567210" observedRunningTime="2026-01-23 06:42:27.340434099 +0000 UTC m=+1350.572942073" watchObservedRunningTime="2026-01-23 06:42:27.347067963 +0000 UTC m=+1350.579575957" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.532979 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=36.532945962 podStartE2EDuration="36.532945962s" podCreationTimestamp="2026-01-23 06:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:27.501269843 +0000 UTC 
m=+1350.733777837" watchObservedRunningTime="2026-01-23 06:42:27.532945962 +0000 UTC m=+1350.765453936" Jan 23 06:42:27 crc kubenswrapper[4784]: I0123 06:42:27.562902 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-65775dd4cd-wxtf2" podStartSLOduration=30.762218767 podStartE2EDuration="31.562862597s" podCreationTimestamp="2026-01-23 06:41:56 +0000 UTC" firstStartedPulling="2026-01-23 06:42:24.897598712 +0000 UTC m=+1348.130106686" lastFinishedPulling="2026-01-23 06:42:25.698242542 +0000 UTC m=+1348.930750516" observedRunningTime="2026-01-23 06:42:27.540706482 +0000 UTC m=+1350.773214476" watchObservedRunningTime="2026-01-23 06:42:27.562862597 +0000 UTC m=+1350.795370591" Jan 23 06:42:28 crc kubenswrapper[4784]: I0123 06:42:28.372450 4784 generic.go:334] "Generic (PLEG): container finished" podID="c41c5000-9962-4f05-af14-ded819d94650" containerID="de732509286404e455b0e5645795fbee959e57108fb45ef1161a8f0bee3a642a" exitCode=0 Jan 23 06:42:28 crc kubenswrapper[4784]: I0123 06:42:28.372487 4784 generic.go:334] "Generic (PLEG): container finished" podID="c41c5000-9962-4f05-af14-ded819d94650" containerID="ccbb4f9d09d24a456c734a4b22f3bd59cf078a7c1ac4b71d42e6f47690d77c1f" exitCode=143 Jan 23 06:42:28 crc kubenswrapper[4784]: I0123 06:42:28.372554 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c41c5000-9962-4f05-af14-ded819d94650","Type":"ContainerDied","Data":"de732509286404e455b0e5645795fbee959e57108fb45ef1161a8f0bee3a642a"} Jan 23 06:42:28 crc kubenswrapper[4784]: I0123 06:42:28.372601 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c41c5000-9962-4f05-af14-ded819d94650","Type":"ContainerDied","Data":"ccbb4f9d09d24a456c734a4b22f3bd59cf078a7c1ac4b71d42e6f47690d77c1f"} Jan 23 06:42:28 crc kubenswrapper[4784]: I0123 06:42:28.387678 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="d58b6a2a-7217-4621-8e1a-c8297e74a086" containerID="e68ae9ac235abbb2055e0aa5afb1be12d5913a81f097b80e4aeedf307562d8f8" exitCode=0 Jan 23 06:42:28 crc kubenswrapper[4784]: I0123 06:42:28.388352 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:42:28 crc kubenswrapper[4784]: I0123 06:42:28.387787 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pzpcf" event={"ID":"d58b6a2a-7217-4621-8e1a-c8297e74a086","Type":"ContainerDied","Data":"e68ae9ac235abbb2055e0aa5afb1be12d5913a81f097b80e4aeedf307562d8f8"} Jan 23 06:42:28 crc kubenswrapper[4784]: I0123 06:42:28.978693 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:42:29 crc kubenswrapper[4784]: I0123 06:42:29.401973 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:42:29 crc kubenswrapper[4784]: I0123 06:42:29.667120 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.862987 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-pzpcf" Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.960733 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-config-data\") pod \"d58b6a2a-7217-4621-8e1a-c8297e74a086\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.960972 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-scripts\") pod \"d58b6a2a-7217-4621-8e1a-c8297e74a086\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.961009 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-combined-ca-bundle\") pod \"d58b6a2a-7217-4621-8e1a-c8297e74a086\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.961070 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58b6a2a-7217-4621-8e1a-c8297e74a086-logs\") pod \"d58b6a2a-7217-4621-8e1a-c8297e74a086\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.961265 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm455\" (UniqueName: \"kubernetes.io/projected/d58b6a2a-7217-4621-8e1a-c8297e74a086-kube-api-access-dm455\") pod \"d58b6a2a-7217-4621-8e1a-c8297e74a086\" (UID: \"d58b6a2a-7217-4621-8e1a-c8297e74a086\") " Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.963348 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d58b6a2a-7217-4621-8e1a-c8297e74a086-logs" (OuterVolumeSpecName: "logs") pod "d58b6a2a-7217-4621-8e1a-c8297e74a086" (UID: "d58b6a2a-7217-4621-8e1a-c8297e74a086"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.970124 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d58b6a2a-7217-4621-8e1a-c8297e74a086-kube-api-access-dm455" (OuterVolumeSpecName: "kube-api-access-dm455") pod "d58b6a2a-7217-4621-8e1a-c8297e74a086" (UID: "d58b6a2a-7217-4621-8e1a-c8297e74a086"). InnerVolumeSpecName "kube-api-access-dm455". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.971287 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-scripts" (OuterVolumeSpecName: "scripts") pod "d58b6a2a-7217-4621-8e1a-c8297e74a086" (UID: "d58b6a2a-7217-4621-8e1a-c8297e74a086"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:32 crc kubenswrapper[4784]: I0123 06:42:32.995398 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-config-data" (OuterVolumeSpecName: "config-data") pod "d58b6a2a-7217-4621-8e1a-c8297e74a086" (UID: "d58b6a2a-7217-4621-8e1a-c8297e74a086"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.000911 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d58b6a2a-7217-4621-8e1a-c8297e74a086" (UID: "d58b6a2a-7217-4621-8e1a-c8297e74a086"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.065197 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.065251 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.065264 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58b6a2a-7217-4621-8e1a-c8297e74a086-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.065274 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm455\" (UniqueName: \"kubernetes.io/projected/d58b6a2a-7217-4621-8e1a-c8297e74a086-kube-api-access-dm455\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.065288 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58b6a2a-7217-4621-8e1a-c8297e74a086-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.457111 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-pzpcf" event={"ID":"d58b6a2a-7217-4621-8e1a-c8297e74a086","Type":"ContainerDied","Data":"1cc607afce43a96145c1c5899b234668de438eb608eeb4eba653c90091892163"} Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.457158 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc607afce43a96145c1c5899b234668de438eb608eeb4eba653c90091892163" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.457216 4784 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-db-sync-pzpcf" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.978702 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 23 06:42:33 crc kubenswrapper[4784]: I0123 06:42:33.989307 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.129321 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-65d5f4f9bd-jjkgn"] Jan 23 06:42:34 crc kubenswrapper[4784]: E0123 06:42:34.130054 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d58b6a2a-7217-4621-8e1a-c8297e74a086" containerName="placement-db-sync" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.130080 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d58b6a2a-7217-4621-8e1a-c8297e74a086" containerName="placement-db-sync" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.130321 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d58b6a2a-7217-4621-8e1a-c8297e74a086" containerName="placement-db-sync" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.131612 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.134607 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.135005 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gz8cs" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.135375 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.135465 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.137217 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.140853 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65d5f4f9bd-jjkgn"] Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.298791 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20502d07-c74c-4f56-9ea3-10bc8746f31b-logs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.298895 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-config-data\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.298928 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-internal-tls-certs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.299022 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-public-tls-certs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.299066 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-combined-ca-bundle\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.299220 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zpqt\" (UniqueName: \"kubernetes.io/projected/20502d07-c74c-4f56-9ea3-10bc8746f31b-kube-api-access-9zpqt\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.299274 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-scripts\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.402210 
4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20502d07-c74c-4f56-9ea3-10bc8746f31b-logs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.402364 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-config-data\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.402407 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-internal-tls-certs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.403020 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20502d07-c74c-4f56-9ea3-10bc8746f31b-logs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.403897 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-public-tls-certs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.404005 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-combined-ca-bundle\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.404184 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zpqt\" (UniqueName: \"kubernetes.io/projected/20502d07-c74c-4f56-9ea3-10bc8746f31b-kube-api-access-9zpqt\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.404279 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-scripts\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.424617 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-internal-tls-certs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.425527 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-scripts\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.426187 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-combined-ca-bundle\") pod 
\"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.426960 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-config-data\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.434470 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20502d07-c74c-4f56-9ea3-10bc8746f31b-public-tls-certs\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.446963 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zpqt\" (UniqueName: \"kubernetes.io/projected/20502d07-c74c-4f56-9ea3-10bc8746f31b-kube-api-access-9zpqt\") pod \"placement-65d5f4f9bd-jjkgn\" (UID: \"20502d07-c74c-4f56-9ea3-10bc8746f31b\") " pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.458870 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.475892 4784 generic.go:334] "Generic (PLEG): container finished" podID="f9cb908c-22d4-4554-b394-68e4e32793f3" containerID="d50e7c0e88ae98b6a02097aa02bd8ed3b1d22b945cc906f3d3700e2aec4afc9f" exitCode=0 Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.476853 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j4q25" event={"ID":"f9cb908c-22d4-4554-b394-68e4e32793f3","Type":"ContainerDied","Data":"d50e7c0e88ae98b6a02097aa02bd8ed3b1d22b945cc906f3d3700e2aec4afc9f"} Jan 23 06:42:34 crc kubenswrapper[4784]: I0123 06:42:34.496224 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.070151 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.171436 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-credential-keys\") pod \"f9cb908c-22d4-4554-b394-68e4e32793f3\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.171855 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-scripts\") pod \"f9cb908c-22d4-4554-b394-68e4e32793f3\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.171984 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-fernet-keys\") pod \"f9cb908c-22d4-4554-b394-68e4e32793f3\" (UID: 
\"f9cb908c-22d4-4554-b394-68e4e32793f3\") " Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.172038 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-combined-ca-bundle\") pod \"f9cb908c-22d4-4554-b394-68e4e32793f3\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.172097 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm4vc\" (UniqueName: \"kubernetes.io/projected/f9cb908c-22d4-4554-b394-68e4e32793f3-kube-api-access-nm4vc\") pod \"f9cb908c-22d4-4554-b394-68e4e32793f3\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.172290 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-config-data\") pod \"f9cb908c-22d4-4554-b394-68e4e32793f3\" (UID: \"f9cb908c-22d4-4554-b394-68e4e32793f3\") " Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.200073 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-scripts" (OuterVolumeSpecName: "scripts") pod "f9cb908c-22d4-4554-b394-68e4e32793f3" (UID: "f9cb908c-22d4-4554-b394-68e4e32793f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.200173 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f9cb908c-22d4-4554-b394-68e4e32793f3" (UID: "f9cb908c-22d4-4554-b394-68e4e32793f3"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.200295 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cb908c-22d4-4554-b394-68e4e32793f3-kube-api-access-nm4vc" (OuterVolumeSpecName: "kube-api-access-nm4vc") pod "f9cb908c-22d4-4554-b394-68e4e32793f3" (UID: "f9cb908c-22d4-4554-b394-68e4e32793f3"). InnerVolumeSpecName "kube-api-access-nm4vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.200331 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f9cb908c-22d4-4554-b394-68e4e32793f3" (UID: "f9cb908c-22d4-4554-b394-68e4e32793f3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.282910 4784 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.282956 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm4vc\" (UniqueName: \"kubernetes.io/projected/f9cb908c-22d4-4554-b394-68e4e32793f3-kube-api-access-nm4vc\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.282969 4784 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.282979 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-scripts\") on node \"crc\" 
DevicePath \"\"" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.291141 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-config-data" (OuterVolumeSpecName: "config-data") pod "f9cb908c-22d4-4554-b394-68e4e32793f3" (UID: "f9cb908c-22d4-4554-b394-68e4e32793f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.320542 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9cb908c-22d4-4554-b394-68e4e32793f3" (UID: "f9cb908c-22d4-4554-b394-68e4e32793f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.387521 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.388014 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9cb908c-22d4-4554-b394-68e4e32793f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.443877 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.443945 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.519507 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j4q25" 
event={"ID":"f9cb908c-22d4-4554-b394-68e4e32793f3","Type":"ContainerDied","Data":"b6c348d011c0c9272f6429b70d45029401eb6d0c48c2e001edca15cc506f900f"} Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.519564 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6c348d011c0c9272f6429b70d45029401eb6d0c48c2e001edca15cc506f900f" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.520108 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j4q25" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.546576 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.549409 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.569446 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" event={"ID":"239c12d0-5821-4bcc-9b6e-b90a896731cd","Type":"ContainerStarted","Data":"44aef79274973ca65bd96ff6ae614bde5568a481ed5cb84ef671f163d3f8de58"} Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.569830 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.668825 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" podStartSLOduration=12.668786084 podStartE2EDuration="12.668786084s" podCreationTimestamp="2026-01-23 06:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:36.618357364 +0000 UTC m=+1359.850865338" watchObservedRunningTime="2026-01-23 06:42:36.668786084 +0000 UTC m=+1359.901294068" Jan 23 06:42:36 crc 
kubenswrapper[4784]: I0123 06:42:36.699899 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.723290 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5959d8d8f9-nvgzc"] Jan 23 06:42:36 crc kubenswrapper[4784]: E0123 06:42:36.724177 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cb908c-22d4-4554-b394-68e4e32793f3" containerName="keystone-bootstrap" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.725119 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cb908c-22d4-4554-b394-68e4e32793f3" containerName="keystone-bootstrap" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.726566 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9cb908c-22d4-4554-b394-68e4e32793f3" containerName="keystone-bootstrap" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.734362 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.743468 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.743555 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.743508 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.743700 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.743822 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.744362 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2zq2z" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.772225 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5959d8d8f9-nvgzc"] Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.798413 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-credential-keys\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.798481 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-internal-tls-certs\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " 
pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.798506 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-scripts\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.798528 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-fernet-keys\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.798571 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-config-data\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.798614 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-public-tls-certs\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.798640 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrhz\" (UniqueName: \"kubernetes.io/projected/82f608f8-8c09-4f0a-b618-6a90c4d2794f-kube-api-access-xdrhz\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " 
pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.798710 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-combined-ca-bundle\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.804236 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65d5f4f9bd-jjkgn"] Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.925011 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-internal-tls-certs\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.925105 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-scripts\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.925152 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-fernet-keys\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.925240 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-config-data\") pod 
\"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.925332 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-public-tls-certs\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.925374 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrhz\" (UniqueName: \"kubernetes.io/projected/82f608f8-8c09-4f0a-b618-6a90c4d2794f-kube-api-access-xdrhz\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.925527 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-combined-ca-bundle\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.925604 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-credential-keys\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.950365 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-combined-ca-bundle\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: 
\"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.955352 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-fernet-keys\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:36 crc kubenswrapper[4784]: I0123 06:42:36.972786 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrhz\" (UniqueName: \"kubernetes.io/projected/82f608f8-8c09-4f0a-b618-6a90c4d2794f-kube-api-access-xdrhz\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.023609 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-866b9d495-7tw9h"] Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.028601 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.056690 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-scripts\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.057076 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-config-data\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.082181 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-public-tls-certs\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.082249 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-credential-keys\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.082494 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f608f8-8c09-4f0a-b618-6a90c4d2794f-internal-tls-certs\") pod \"keystone-5959d8d8f9-nvgzc\" (UID: \"82f608f8-8c09-4f0a-b618-6a90c4d2794f\") " pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.138367 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-combined-ca-bundle\") pod \"c41c5000-9962-4f05-af14-ded819d94650\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.139007 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-scripts\") pod \"c41c5000-9962-4f05-af14-ded819d94650\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.139132 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-httpd-run\") pod \"c41c5000-9962-4f05-af14-ded819d94650\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.139163 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-config-data\") pod \"c41c5000-9962-4f05-af14-ded819d94650\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.139197 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-logs\") pod \"c41c5000-9962-4f05-af14-ded819d94650\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.139366 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c41c5000-9962-4f05-af14-ded819d94650\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " Jan 23 06:42:37 crc 
kubenswrapper[4784]: I0123 06:42:37.139406 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q28s4\" (UniqueName: \"kubernetes.io/projected/c41c5000-9962-4f05-af14-ded819d94650-kube-api-access-q28s4\") pod \"c41c5000-9962-4f05-af14-ded819d94650\" (UID: \"c41c5000-9962-4f05-af14-ded819d94650\") " Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.140777 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c41c5000-9962-4f05-af14-ded819d94650" (UID: "c41c5000-9962-4f05-af14-ded819d94650"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.145686 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-logs" (OuterVolumeSpecName: "logs") pod "c41c5000-9962-4f05-af14-ded819d94650" (UID: "c41c5000-9962-4f05-af14-ded819d94650"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.149888 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-scripts" (OuterVolumeSpecName: "scripts") pod "c41c5000-9962-4f05-af14-ded819d94650" (UID: "c41c5000-9962-4f05-af14-ded819d94650"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.153911 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41c5000-9962-4f05-af14-ded819d94650-kube-api-access-q28s4" (OuterVolumeSpecName: "kube-api-access-q28s4") pod "c41c5000-9962-4f05-af14-ded819d94650" (UID: "c41c5000-9962-4f05-af14-ded819d94650"). 
InnerVolumeSpecName "kube-api-access-q28s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.156858 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "c41c5000-9962-4f05-af14-ded819d94650" (UID: "c41c5000-9962-4f05-af14-ded819d94650"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.184155 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c41c5000-9962-4f05-af14-ded819d94650" (UID: "c41c5000-9962-4f05-af14-ded819d94650"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.242296 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.242374 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.242390 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q28s4\" (UniqueName: \"kubernetes.io/projected/c41c5000-9962-4f05-af14-ded819d94650-kube-api-access-q28s4\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.242407 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.242421 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.242432 4784 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c41c5000-9962-4f05-af14-ded819d94650-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.250920 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-config-data" (OuterVolumeSpecName: "config-data") pod "c41c5000-9962-4f05-af14-ded819d94650" (UID: "c41c5000-9962-4f05-af14-ded819d94650"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.262792 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.305416 4784 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.346863 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41c5000-9962-4f05-af14-ded819d94650-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.346906 4784 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.658177 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.659488 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c41c5000-9962-4f05-af14-ded819d94650","Type":"ContainerDied","Data":"82d4225ddc4f3c597f05c9e299fe03f8bd03e703a98ecdcd1a0ebf4ec19f8ba8"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.660921 4784 scope.go:117] "RemoveContainer" containerID="de732509286404e455b0e5645795fbee959e57108fb45ef1161a8f0bee3a642a" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.699475 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65d5f4f9bd-jjkgn" event={"ID":"20502d07-c74c-4f56-9ea3-10bc8746f31b","Type":"ContainerStarted","Data":"754ba3955ddce33f3b1650b5b59c62d64948b590998c5746e30d41624befa76d"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.716861 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8efcea72-3c4e-4458-8c0c-0e08a090b037","Type":"ContainerStarted","Data":"7fe6192d7ae7aa3ee8930b98adda10683c494470230651883cdeb1e9e5d3cd4a"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.759550 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=14.079957356 podStartE2EDuration="24.759509595s" podCreationTimestamp="2026-01-23 06:42:13 +0000 UTC" firstStartedPulling="2026-01-23 06:42:25.021583609 +0000 UTC m=+1348.254091583" lastFinishedPulling="2026-01-23 06:42:35.701135848 +0000 UTC m=+1358.933643822" observedRunningTime="2026-01-23 06:42:37.748630598 +0000 UTC m=+1360.981138572" watchObservedRunningTime="2026-01-23 06:42:37.759509595 +0000 UTC m=+1360.992017569" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.766160 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3a78b6d3-fcc8-4cc3-a549-c0ba13460333","Type":"ContainerStarted","Data":"692ee3f328ad7a4c42001a6352a2335f90e697f9407ef9e436036b9e4a045645"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.770469 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-866b9d495-7tw9h" event={"ID":"cf03953b-09e0-4872-ba7a-cacf7673f1af","Type":"ContainerStarted","Data":"de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.770633 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-866b9d495-7tw9h" event={"ID":"cf03953b-09e0-4872-ba7a-cacf7673f1af","Type":"ContainerStarted","Data":"08f825cbc34370070536549d9e9b5873ed4e0ff51c18da0071603fb823241fc4"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.779063 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3565a005-cf5e-43c0-ab31-59071dc6fb9c","Type":"ContainerStarted","Data":"0fbd53a084677eb9fff192e0dbd7fda4301b3b0c38333286bdbe97c2fb2038d1"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.800432 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"1ac1f415-cec7-4110-a87a-9a725a6bf7bb","Type":"ContainerStarted","Data":"f36f8325e9db0c305a2d342165741c014579246691fb2706b591e6c4d04062cd"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.816438 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g49wt" event={"ID":"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f","Type":"ContainerStarted","Data":"073776a870466ed2af0bc20d6315b03a9d062d43d0d5545bfe815974d5bd1f72"} Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.858071 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=14.210109546 podStartE2EDuration="24.858039317s" podCreationTimestamp="2026-01-23 06:42:13 +0000 UTC" 
firstStartedPulling="2026-01-23 06:42:25.031686537 +0000 UTC m=+1348.264194511" lastFinishedPulling="2026-01-23 06:42:35.679616308 +0000 UTC m=+1358.912124282" observedRunningTime="2026-01-23 06:42:37.839073661 +0000 UTC m=+1361.071581655" watchObservedRunningTime="2026-01-23 06:42:37.858039317 +0000 UTC m=+1361.090547291" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.888885 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-g49wt" podStartSLOduration=4.022916941 podStartE2EDuration="52.888843824s" podCreationTimestamp="2026-01-23 06:41:45 +0000 UTC" firstStartedPulling="2026-01-23 06:41:47.230973613 +0000 UTC m=+1310.463481587" lastFinishedPulling="2026-01-23 06:42:36.096900496 +0000 UTC m=+1359.329408470" observedRunningTime="2026-01-23 06:42:37.883099103 +0000 UTC m=+1361.115607077" watchObservedRunningTime="2026-01-23 06:42:37.888843824 +0000 UTC m=+1361.121351828" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.944572 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.952180 4784 scope.go:117] "RemoveContainer" containerID="ccbb4f9d09d24a456c734a4b22f3bd59cf078a7c1ac4b71d42e6f47690d77c1f" Jan 23 06:42:37 crc kubenswrapper[4784]: I0123 06:42:37.989509 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.011832 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:42:38 crc kubenswrapper[4784]: E0123 06:42:38.012896 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41c5000-9962-4f05-af14-ded819d94650" containerName="glance-log" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.012918 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41c5000-9962-4f05-af14-ded819d94650" 
containerName="glance-log" Jan 23 06:42:38 crc kubenswrapper[4784]: E0123 06:42:38.012964 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41c5000-9962-4f05-af14-ded819d94650" containerName="glance-httpd" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.012971 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41c5000-9962-4f05-af14-ded819d94650" containerName="glance-httpd" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.013179 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41c5000-9962-4f05-af14-ded819d94650" containerName="glance-httpd" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.013209 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41c5000-9962-4f05-af14-ded819d94650" containerName="glance-log" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.014505 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.018346 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.018671 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.038915 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.171244 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5959d8d8f9-nvgzc"] Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.179140 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: 
\"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.179190 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-logs\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.180483 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.181051 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.181104 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.181144 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jtm8\" (UniqueName: \"kubernetes.io/projected/7c82e190-0062-4ebc-8ee5-74401deb567e-kube-api-access-7jtm8\") pod 
\"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.181182 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.181264 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: W0123 06:42:38.233380 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82f608f8_8c09_4f0a_b618_6a90c4d2794f.slice/crio-af4b00b63c8c01d9b7f6d234cd5e49493c36cc998721cb82c2965b0435952b03 WatchSource:0}: Error finding container af4b00b63c8c01d9b7f6d234cd5e49493c36cc998721cb82c2965b0435952b03: Status 404 returned error can't find the container with id af4b00b63c8c01d9b7f6d234cd5e49493c36cc998721cb82c2965b0435952b03 Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.284598 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.285229 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.285358 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.285522 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jtm8\" (UniqueName: \"kubernetes.io/projected/7c82e190-0062-4ebc-8ee5-74401deb567e-kube-api-access-7jtm8\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.285635 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.285832 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.286026 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.286108 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-logs\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.286709 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-logs\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.290588 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.291335 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.316844 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.318269 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.325581 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.334546 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jtm8\" (UniqueName: \"kubernetes.io/projected/7c82e190-0062-4ebc-8ee5-74401deb567e-kube-api-access-7jtm8\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.334785 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.563094 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " 
pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.786210 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.881036 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tvpzc" event={"ID":"e52f206e-7230-4c60-a8c2-ad6cebabc434","Type":"ContainerStarted","Data":"c692803d50a9ab6d420bbd22e6b0cd4a2e3e2c1935e9a7fdef361916b215416c"} Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.884963 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5959d8d8f9-nvgzc" event={"ID":"82f608f8-8c09-4f0a-b618-6a90c4d2794f","Type":"ContainerStarted","Data":"af4b00b63c8c01d9b7f6d234cd5e49493c36cc998721cb82c2965b0435952b03"} Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.891594 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65d5f4f9bd-jjkgn" event={"ID":"20502d07-c74c-4f56-9ea3-10bc8746f31b","Type":"ContainerStarted","Data":"2c8bd5216cb641e61c93c5ccbc3ee1803ff42eb61580ec0295da9f12a10b3772"} Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.902193 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.903028 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api-log" containerID="cri-o://ff3f52ccf574c9ce433073d22a3669808e3eae1d89486bd36e4c8f65e174add6" gracePeriod=30 Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.903404 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api" containerID="cri-o://d293815799e9b399b38036a0f17bcc01c4febc648b0dba368a341772d3552ad4" 
gracePeriod=30 Jan 23 06:42:38 crc kubenswrapper[4784]: I0123 06:42:38.950107 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tvpzc" podStartSLOduration=5.08140227 podStartE2EDuration="53.950082301s" podCreationTimestamp="2026-01-23 06:41:45 +0000 UTC" firstStartedPulling="2026-01-23 06:41:47.1194091 +0000 UTC m=+1310.351917064" lastFinishedPulling="2026-01-23 06:42:35.988089121 +0000 UTC m=+1359.220597095" observedRunningTime="2026-01-23 06:42:38.94355786 +0000 UTC m=+1362.176065834" watchObservedRunningTime="2026-01-23 06:42:38.950082301 +0000 UTC m=+1362.182590275" Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.034427 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": EOF" Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.058070 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.315932 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41c5000-9962-4f05-af14-ded819d94650" path="/var/lib/kubelet/pods/c41c5000-9962-4f05-af14-ded819d94650/volumes" Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.753599 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:42:39 crc kubenswrapper[4784]: W0123 06:42:39.778020 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c82e190_0062_4ebc_8ee5_74401deb567e.slice/crio-1b79d7abcb7ef275a45e014daede5ee75a8d4a43723e49c2516022762a729924 WatchSource:0}: Error finding container 1b79d7abcb7ef275a45e014daede5ee75a8d4a43723e49c2516022762a729924: Status 404 returned error can't find the container with id 
1b79d7abcb7ef275a45e014daede5ee75a8d4a43723e49c2516022762a729924 Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.940682 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3565a005-cf5e-43c0-ab31-59071dc6fb9c","Type":"ContainerStarted","Data":"0113bed312592c480c7f7b5dc6a9466b8552bc8d763951f25f640709e4cf3757"} Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.964081 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5959d8d8f9-nvgzc" event={"ID":"82f608f8-8c09-4f0a-b618-6a90c4d2794f","Type":"ContainerStarted","Data":"8e0ef589340193f97122c7d7090fa9d22f15061bf381d55bcda69bafb38123de"} Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.965686 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.977218 4784 generic.go:334] "Generic (PLEG): container finished" podID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerID="ff3f52ccf574c9ce433073d22a3669808e3eae1d89486bd36e4c8f65e174add6" exitCode=143 Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.977340 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7f4508f4-6ead-496d-8449-fe100d604c5b","Type":"ContainerDied","Data":"ff3f52ccf574c9ce433073d22a3669808e3eae1d89486bd36e4c8f65e174add6"} Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.993152 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c82e190-0062-4ebc-8ee5-74401deb567e","Type":"ContainerStarted","Data":"1b79d7abcb7ef275a45e014daede5ee75a8d4a43723e49c2516022762a729924"} Jan 23 06:42:39 crc kubenswrapper[4784]: I0123 06:42:39.997028 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5959d8d8f9-nvgzc" podStartSLOduration=3.996997853 podStartE2EDuration="3.996997853s" 
podCreationTimestamp="2026-01-23 06:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:39.993467916 +0000 UTC m=+1363.225975900" watchObservedRunningTime="2026-01-23 06:42:39.996997853 +0000 UTC m=+1363.229505827" Jan 23 06:42:40 crc kubenswrapper[4784]: I0123 06:42:40.043290 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-866b9d495-7tw9h" event={"ID":"cf03953b-09e0-4872-ba7a-cacf7673f1af","Type":"ContainerStarted","Data":"6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e"} Jan 23 06:42:40 crc kubenswrapper[4784]: I0123 06:42:40.043612 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:42:40 crc kubenswrapper[4784]: I0123 06:42:40.085041 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-866b9d495-7tw9h" podStartSLOduration=14.08501398 podStartE2EDuration="14.08501398s" podCreationTimestamp="2026-01-23 06:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:40.08060042 +0000 UTC m=+1363.313108404" watchObservedRunningTime="2026-01-23 06:42:40.08501398 +0000 UTC m=+1363.317521954" Jan 23 06:42:41 crc kubenswrapper[4784]: I0123 06:42:41.060964 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c82e190-0062-4ebc-8ee5-74401deb567e","Type":"ContainerStarted","Data":"8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda"} Jan 23 06:42:41 crc kubenswrapper[4784]: I0123 06:42:41.065545 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3565a005-cf5e-43c0-ab31-59071dc6fb9c","Type":"ContainerStarted","Data":"f7234ac9ecffe753b48da6d5bfd59256741840b45b7b8c635986505e2bff1cbf"} 
Jan 23 06:42:41 crc kubenswrapper[4784]: I0123 06:42:41.071385 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65d5f4f9bd-jjkgn" event={"ID":"20502d07-c74c-4f56-9ea3-10bc8746f31b","Type":"ContainerStarted","Data":"5fa1af88e99b19135c51ba67812e7c467531db9261a99fa820a1be70ef69a230"} Jan 23 06:42:41 crc kubenswrapper[4784]: I0123 06:42:41.071785 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:41 crc kubenswrapper[4784]: I0123 06:42:41.071858 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:42:41 crc kubenswrapper[4784]: I0123 06:42:41.129956 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.129927573 podStartE2EDuration="15.129927573s" podCreationTimestamp="2026-01-23 06:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:41.127286888 +0000 UTC m=+1364.359794892" watchObservedRunningTime="2026-01-23 06:42:41.129927573 +0000 UTC m=+1364.362435547" Jan 23 06:42:41 crc kubenswrapper[4784]: I0123 06:42:41.166713 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-65d5f4f9bd-jjkgn" podStartSLOduration=7.166687498 podStartE2EDuration="7.166687498s" podCreationTimestamp="2026-01-23 06:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:41.161002538 +0000 UTC m=+1364.393510532" watchObservedRunningTime="2026-01-23 06:42:41.166687498 +0000 UTC m=+1364.399195472" Jan 23 06:42:43 crc kubenswrapper[4784]: I0123 06:42:43.113148 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"7c82e190-0062-4ebc-8ee5-74401deb567e","Type":"ContainerStarted","Data":"ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c"} Jan 23 06:42:43 crc kubenswrapper[4784]: I0123 06:42:43.169051 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.16902859 podStartE2EDuration="6.16902859s" podCreationTimestamp="2026-01-23 06:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:43.168323453 +0000 UTC m=+1366.400831427" watchObservedRunningTime="2026-01-23 06:42:43.16902859 +0000 UTC m=+1366.401536564" Jan 23 06:42:43 crc kubenswrapper[4784]: I0123 06:42:43.851685 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:42:43 crc kubenswrapper[4784]: I0123 06:42:43.852084 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 06:42:43 crc kubenswrapper[4784]: I0123 06:42:43.901632 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.058429 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.094272 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.129648 4784 generic.go:334] "Generic (PLEG): container finished" podID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerID="d293815799e9b399b38036a0f17bcc01c4febc648b0dba368a341772d3552ad4" exitCode=0 Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.129865 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-api-0" event={"ID":"7f4508f4-6ead-496d-8449-fe100d604c5b","Type":"ContainerDied","Data":"d293815799e9b399b38036a0f17bcc01c4febc648b0dba368a341772d3552ad4"} Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.164919 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.167653 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.545004 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.696540 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qmblm"] Jan 23 06:42:44 crc kubenswrapper[4784]: I0123 06:42:44.700180 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" podUID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" containerName="dnsmasq-dns" containerID="cri-o://3d4f4bb952328d1f23873c0cd87f9f9d3819f53848d99269277b7e75b003b372" gracePeriod=10 Jan 23 06:42:45 crc kubenswrapper[4784]: I0123 06:42:45.150867 4784 generic.go:334] "Generic (PLEG): container finished" podID="d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" containerID="073776a870466ed2af0bc20d6315b03a9d062d43d0d5545bfe815974d5bd1f72" exitCode=0 Jan 23 06:42:45 crc kubenswrapper[4784]: I0123 06:42:45.150968 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g49wt" event={"ID":"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f","Type":"ContainerDied","Data":"073776a870466ed2af0bc20d6315b03a9d062d43d0d5545bfe815974d5bd1f72"} Jan 23 06:42:45 crc kubenswrapper[4784]: I0123 06:42:45.159427 4784 generic.go:334] "Generic (PLEG): container finished" podID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" 
containerID="3d4f4bb952328d1f23873c0cd87f9f9d3819f53848d99269277b7e75b003b372" exitCode=0 Jan 23 06:42:45 crc kubenswrapper[4784]: I0123 06:42:45.159873 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" event={"ID":"d7d605a9-5002-443e-b7d3-8e8cb7922d10","Type":"ContainerDied","Data":"3d4f4bb952328d1f23873c0cd87f9f9d3819f53848d99269277b7e75b003b372"} Jan 23 06:42:46 crc kubenswrapper[4784]: I0123 06:42:46.448338 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 23 06:42:46 crc kubenswrapper[4784]: I0123 06:42:46.551591 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65775dd4cd-wxtf2" podUID="9a1391cd-fdf4-4770-ba43-17cb0657e117" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 23 06:42:46 crc kubenswrapper[4784]: I0123 06:42:46.947643 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:46 crc kubenswrapper[4784]: I0123 06:42:46.948676 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:46 crc kubenswrapper[4784]: I0123 06:42:46.985218 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:47 crc kubenswrapper[4784]: I0123 06:42:47.014077 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:47 crc kubenswrapper[4784]: I0123 06:42:47.149618 4784 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" podUID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: connect: connection refused" Jan 23 06:42:47 crc kubenswrapper[4784]: I0123 06:42:47.190363 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:47 crc kubenswrapper[4784]: I0123 06:42:47.190431 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:48 crc kubenswrapper[4784]: I0123 06:42:48.787049 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 06:42:48 crc kubenswrapper[4784]: I0123 06:42:48.787781 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 06:42:48 crc kubenswrapper[4784]: I0123 06:42:48.822549 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 06:42:48 crc kubenswrapper[4784]: I0123 06:42:48.853333 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 06:42:48 crc kubenswrapper[4784]: I0123 06:42:48.980543 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:42:48 crc kubenswrapper[4784]: I0123 06:42:48.980658 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api-log" probeResult="failure" output="Get 
\"http://10.217.0.163:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:42:49 crc kubenswrapper[4784]: I0123 06:42:49.213721 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 06:42:49 crc kubenswrapper[4784]: I0123 06:42:49.213848 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 06:42:49 crc kubenswrapper[4784]: I0123 06:42:49.920463 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:42:49 crc kubenswrapper[4784]: I0123 06:42:49.937530 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-g49wt" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.000054 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.000273 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.058774 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-custom-prometheus-ca\") pod \"7f4508f4-6ead-496d-8449-fe100d604c5b\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.058840 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4508f4-6ead-496d-8449-fe100d604c5b-logs\") pod \"7f4508f4-6ead-496d-8449-fe100d604c5b\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.058914 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-config-data\") pod \"7f4508f4-6ead-496d-8449-fe100d604c5b\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.059058 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqmjx\" (UniqueName: \"kubernetes.io/projected/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-kube-api-access-dqmjx\") pod \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.059188 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-combined-ca-bundle\") pod \"7f4508f4-6ead-496d-8449-fe100d604c5b\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.059273 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-combined-ca-bundle\") pod \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\" (UID: \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.059307 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj85m\" (UniqueName: \"kubernetes.io/projected/7f4508f4-6ead-496d-8449-fe100d604c5b-kube-api-access-rj85m\") pod \"7f4508f4-6ead-496d-8449-fe100d604c5b\" (UID: \"7f4508f4-6ead-496d-8449-fe100d604c5b\") " Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.059357 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-db-sync-config-data\") pod \"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\" (UID: 
\"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f\") " Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.059655 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4508f4-6ead-496d-8449-fe100d604c5b-logs" (OuterVolumeSpecName: "logs") pod "7f4508f4-6ead-496d-8449-fe100d604c5b" (UID: "7f4508f4-6ead-496d-8449-fe100d604c5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.059973 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4508f4-6ead-496d-8449-fe100d604c5b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.075218 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4508f4-6ead-496d-8449-fe100d604c5b-kube-api-access-rj85m" (OuterVolumeSpecName: "kube-api-access-rj85m") pod "7f4508f4-6ead-496d-8449-fe100d604c5b" (UID: "7f4508f4-6ead-496d-8449-fe100d604c5b"). InnerVolumeSpecName "kube-api-access-rj85m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.083066 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-kube-api-access-dqmjx" (OuterVolumeSpecName: "kube-api-access-dqmjx") pod "d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" (UID: "d192b60c-bc41-4f7d-9c61-2748ad0f8a7f"). InnerVolumeSpecName "kube-api-access-dqmjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.091200 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.109676 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f4508f4-6ead-496d-8449-fe100d604c5b" (UID: "7f4508f4-6ead-496d-8449-fe100d604c5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.116180 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" (UID: "d192b60c-bc41-4f7d-9c61-2748ad0f8a7f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.138069 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7f4508f4-6ead-496d-8449-fe100d604c5b" (UID: "7f4508f4-6ead-496d-8449-fe100d604c5b"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.165150 4784 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.165207 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqmjx\" (UniqueName: \"kubernetes.io/projected/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-kube-api-access-dqmjx\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.165222 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.165382 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj85m\" (UniqueName: \"kubernetes.io/projected/7f4508f4-6ead-496d-8449-fe100d604c5b-kube-api-access-rj85m\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.165396 4784 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.195369 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" (UID: "d192b60c-bc41-4f7d-9c61-2748ad0f8a7f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.197258 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-config-data" (OuterVolumeSpecName: "config-data") pod "7f4508f4-6ead-496d-8449-fe100d604c5b" (UID: "7f4508f4-6ead-496d-8449-fe100d604c5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.239580 4784 generic.go:334] "Generic (PLEG): container finished" podID="e52f206e-7230-4c60-a8c2-ad6cebabc434" containerID="c692803d50a9ab6d420bbd22e6b0cd4a2e3e2c1935e9a7fdef361916b215416c" exitCode=0 Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.242148 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tvpzc" event={"ID":"e52f206e-7230-4c60-a8c2-ad6cebabc434","Type":"ContainerDied","Data":"c692803d50a9ab6d420bbd22e6b0cd4a2e3e2c1935e9a7fdef361916b215416c"} Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.256562 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-g49wt" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.256925 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-g49wt" event={"ID":"d192b60c-bc41-4f7d-9c61-2748ad0f8a7f","Type":"ContainerDied","Data":"190c1f011387c1f8c0912399a10f530dda5739704cdc0665b3cccfb9e469d462"} Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.257011 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="190c1f011387c1f8c0912399a10f530dda5739704cdc0665b3cccfb9e469d462" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.270312 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.270353 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4508f4-6ead-496d-8449-fe100d604c5b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.273740 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.277017 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7f4508f4-6ead-496d-8449-fe100d604c5b","Type":"ContainerDied","Data":"283b7455fa8e9a7c501aefbbd50131af6e2c0035e63ea067a8143f765d3308f2"} Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.277135 4784 scope.go:117] "RemoveContainer" containerID="d293815799e9b399b38036a0f17bcc01c4febc648b0dba368a341772d3552ad4" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.342487 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.350345 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.385743 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:50 crc kubenswrapper[4784]: E0123 06:42:50.386299 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" containerName="barbican-db-sync" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.386324 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" containerName="barbican-db-sync" Jan 23 06:42:50 crc kubenswrapper[4784]: E0123 06:42:50.386340 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.386346 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api" Jan 23 06:42:50 crc kubenswrapper[4784]: E0123 06:42:50.386385 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api-log" Jan 23 06:42:50 crc 
kubenswrapper[4784]: I0123 06:42:50.386392 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api-log" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.386598 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" containerName="barbican-db-sync" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.386612 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api-log" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.386636 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.388134 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.397574 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.399587 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.399998 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.400303 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.579115 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 
06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.579250 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc2wr\" (UniqueName: \"kubernetes.io/projected/727a8885-767d-45ab-a5d7-52a44e0d3823-kube-api-access-jc2wr\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.579370 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.579417 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.579442 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-public-tls-certs\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.579591 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-config-data\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.580149 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/727a8885-767d-45ab-a5d7-52a44e0d3823-logs\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.684638 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/727a8885-767d-45ab-a5d7-52a44e0d3823-logs\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.684884 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.685228 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc2wr\" (UniqueName: \"kubernetes.io/projected/727a8885-767d-45ab-a5d7-52a44e0d3823-kube-api-access-jc2wr\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.685299 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.685358 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-internal-tls-certs\") pod \"watcher-api-0\" (UID: 
\"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.685432 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-public-tls-certs\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.685483 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-config-data\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.690290 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/727a8885-767d-45ab-a5d7-52a44e0d3823-logs\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.696565 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.696584 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.704094 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-config-data\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.705732 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.707531 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/727a8885-767d-45ab-a5d7-52a44e0d3823-public-tls-certs\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.711951 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc2wr\" (UniqueName: \"kubernetes.io/projected/727a8885-767d-45ab-a5d7-52a44e0d3823-kube-api-access-jc2wr\") pod \"watcher-api-0\" (UID: \"727a8885-767d-45ab-a5d7-52a44e0d3823\") " pod="openstack/watcher-api-0" Jan 23 06:42:50 crc kubenswrapper[4784]: I0123 06:42:50.942991 4784 scope.go:117] "RemoveContainer" containerID="ff3f52ccf574c9ce433073d22a3669808e3eae1d89486bd36e4c8f65e174add6" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.010115 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.390358 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" path="/var/lib/kubelet/pods/7f4508f4-6ead-496d-8449-fe100d604c5b/volumes" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.406106 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fd9757bf9-7tmmd"] Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.460152 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.502022 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-64f9677dc8-64nzl"] Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.504217 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.510649 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.510822 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.511060 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kr9c9" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.511164 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.516519 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd9757bf9-7tmmd"] Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.538260 4784 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64f9677dc8-64nzl"] Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.544362 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wk74n"] Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.545397 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.550674 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-config-data\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.550981 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06b51980-afe8-4434-b345-022a3be8f449-logs\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.551170 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-config-data-custom\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.551289 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjk5\" (UniqueName: \"kubernetes.io/projected/06b51980-afe8-4434-b345-022a3be8f449-kube-api-access-dtjk5\") pod 
\"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.551396 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-config-data-custom\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.551569 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cf409f-f9ae-4e80-87db-66495679cf86-logs\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.551683 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2nlb\" (UniqueName: \"kubernetes.io/projected/11cf409f-f9ae-4e80-87db-66495679cf86-kube-api-access-d2nlb\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.551829 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-config-data\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.551935 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-combined-ca-bundle\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.552106 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-combined-ca-bundle\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: E0123 06:42:51.554672 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" containerName="init" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.554822 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" containerName="init" Jan 23 06:42:51 crc kubenswrapper[4784]: E0123 06:42:51.554916 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" containerName="dnsmasq-dns" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.554985 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" containerName="dnsmasq-dns" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.555374 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" containerName="dnsmasq-dns" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.558133 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.655223 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2nlb\" (UniqueName: \"kubernetes.io/projected/11cf409f-f9ae-4e80-87db-66495679cf86-kube-api-access-d2nlb\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.689244 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-config-data\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.689339 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-combined-ca-bundle\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.689423 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-combined-ca-bundle\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.689679 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-config-data\") pod 
\"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.689740 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06b51980-afe8-4434-b345-022a3be8f449-logs\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.690108 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-config-data-custom\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.690172 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjk5\" (UniqueName: \"kubernetes.io/projected/06b51980-afe8-4434-b345-022a3be8f449-kube-api-access-dtjk5\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.690259 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-config-data-custom\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.690408 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cf409f-f9ae-4e80-87db-66495679cf86-logs\") pod 
\"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.691502 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11cf409f-f9ae-4e80-87db-66495679cf86-logs\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.656611 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wk74n"] Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.698851 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06b51980-afe8-4434-b345-022a3be8f449-logs\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.792851 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-sb\") pod \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793007 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mspdp\" (UniqueName: \"kubernetes.io/projected/d7d605a9-5002-443e-b7d3-8e8cb7922d10-kube-api-access-mspdp\") pod \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793078 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-config\") pod \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793230 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-nb\") pod \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793281 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-swift-storage-0\") pod \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793302 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-svc\") pod \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\" (UID: \"d7d605a9-5002-443e-b7d3-8e8cb7922d10\") " Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793699 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793767 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " 
pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793858 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqtx\" (UniqueName: \"kubernetes.io/projected/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-kube-api-access-tpqtx\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793911 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-config\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793948 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.793966 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.854563 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-combined-ca-bundle\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: 
\"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.858249 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-config-data\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.858817 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-combined-ca-bundle\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.859367 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-config-data-custom\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.859986 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06b51980-afe8-4434-b345-022a3be8f449-config-data-custom\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.871626 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2nlb\" (UniqueName: \"kubernetes.io/projected/11cf409f-f9ae-4e80-87db-66495679cf86-kube-api-access-d2nlb\") pod 
\"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.877567 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjk5\" (UniqueName: \"kubernetes.io/projected/06b51980-afe8-4434-b345-022a3be8f449-kube-api-access-dtjk5\") pod \"barbican-worker-5fd9757bf9-7tmmd\" (UID: \"06b51980-afe8-4434-b345-022a3be8f449\") " pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.878220 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7d605a9-5002-443e-b7d3-8e8cb7922d10-kube-api-access-mspdp" (OuterVolumeSpecName: "kube-api-access-mspdp") pod "d7d605a9-5002-443e-b7d3-8e8cb7922d10" (UID: "d7d605a9-5002-443e-b7d3-8e8cb7922d10"). InnerVolumeSpecName "kube-api-access-mspdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.916533 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpqtx\" (UniqueName: \"kubernetes.io/projected/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-kube-api-access-tpqtx\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.917016 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-config\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.917057 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.917073 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.917125 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.917175 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.917251 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mspdp\" (UniqueName: \"kubernetes.io/projected/d7d605a9-5002-443e-b7d3-8e8cb7922d10-kube-api-access-mspdp\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.918429 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: 
\"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.919206 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.924527 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-config\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.926001 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11cf409f-f9ae-4e80-87db-66495679cf86-config-data\") pod \"barbican-keystone-listener-64f9677dc8-64nzl\" (UID: \"11cf409f-f9ae-4e80-87db-66495679cf86\") " pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.930163 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.942276 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 
06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.989705 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7d605a9-5002-443e-b7d3-8e8cb7922d10" (UID: "d7d605a9-5002-443e-b7d3-8e8cb7922d10"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:51 crc kubenswrapper[4784]: I0123 06:42:51.989764 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-config" (OuterVolumeSpecName: "config") pod "d7d605a9-5002-443e-b7d3-8e8cb7922d10" (UID: "d7d605a9-5002-443e-b7d3-8e8cb7922d10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.019125 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpqtx\" (UniqueName: \"kubernetes.io/projected/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-kube-api-access-tpqtx\") pod \"dnsmasq-dns-85ff748b95-wk74n\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.019585 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.019623 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.046276 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55f9fccfc8-b52jv"] Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.052459 4784 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.060117 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.062423 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7d605a9-5002-443e-b7d3-8e8cb7922d10" (UID: "d7d605a9-5002-443e-b7d3-8e8cb7922d10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.086016 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f9fccfc8-b52jv"] Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.124166 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.160604 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7d605a9-5002-443e-b7d3-8e8cb7922d10" (UID: "d7d605a9-5002-443e-b7d3-8e8cb7922d10"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.178770 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7d605a9-5002-443e-b7d3-8e8cb7922d10" (UID: "d7d605a9-5002-443e-b7d3-8e8cb7922d10"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.185370 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd9757bf9-7tmmd" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.297419 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.297535 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a51a41e9-6984-493a-b3af-ecee435cc80f-logs\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.297595 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data-custom\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.297648 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s8vt\" (UniqueName: \"kubernetes.io/projected/a51a41e9-6984-493a-b3af-ecee435cc80f-kube-api-access-4s8vt\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.297708 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-combined-ca-bundle\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.298407 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.298468 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7d605a9-5002-443e-b7d3-8e8cb7922d10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.314206 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.338671 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.401010 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a51a41e9-6984-493a-b3af-ecee435cc80f-logs\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.401525 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data-custom\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.402584 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a51a41e9-6984-493a-b3af-ecee435cc80f-logs\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.403151 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s8vt\" (UniqueName: \"kubernetes.io/projected/a51a41e9-6984-493a-b3af-ecee435cc80f-kube-api-access-4s8vt\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.403396 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-combined-ca-bundle\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " 
pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.403964 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.423802 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data-custom\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.425964 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.426190 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-combined-ca-bundle\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.429619 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s8vt\" (UniqueName: \"kubernetes.io/projected/a51a41e9-6984-493a-b3af-ecee435cc80f-kube-api-access-4s8vt\") pod \"barbican-api-55f9fccfc8-b52jv\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc 
kubenswrapper[4784]: I0123 06:42:52.594970 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" event={"ID":"d7d605a9-5002-443e-b7d3-8e8cb7922d10","Type":"ContainerDied","Data":"310e959582154ca9aec094e69aea13c21b2a3839c6bfb622c3e461724c7f90db"} Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.595050 4784 scope.go:117] "RemoveContainer" containerID="3d4f4bb952328d1f23873c0cd87f9f9d3819f53848d99269277b7e75b003b372" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.595354 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qmblm" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.802702 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.838888 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.862477 4784 scope.go:117] "RemoveContainer" containerID="40c4a05a94f51a488463fa7f5b63f64f6b3c42d44559d059ac6f7cb94411647d" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.951422 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-config-data\") pod \"e52f206e-7230-4c60-a8c2-ad6cebabc434\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.952145 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-db-sync-config-data\") pod \"e52f206e-7230-4c60-a8c2-ad6cebabc434\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.952256 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knvfd\" (UniqueName: \"kubernetes.io/projected/e52f206e-7230-4c60-a8c2-ad6cebabc434-kube-api-access-knvfd\") pod \"e52f206e-7230-4c60-a8c2-ad6cebabc434\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.952476 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-combined-ca-bundle\") pod \"e52f206e-7230-4c60-a8c2-ad6cebabc434\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.952676 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-scripts\") pod \"e52f206e-7230-4c60-a8c2-ad6cebabc434\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.952815 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e52f206e-7230-4c60-a8c2-ad6cebabc434-etc-machine-id\") pod \"e52f206e-7230-4c60-a8c2-ad6cebabc434\" (UID: \"e52f206e-7230-4c60-a8c2-ad6cebabc434\") " Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.953502 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e52f206e-7230-4c60-a8c2-ad6cebabc434-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e52f206e-7230-4c60-a8c2-ad6cebabc434" (UID: "e52f206e-7230-4c60-a8c2-ad6cebabc434"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.972029 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e52f206e-7230-4c60-a8c2-ad6cebabc434" (UID: "e52f206e-7230-4c60-a8c2-ad6cebabc434"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.972220 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52f206e-7230-4c60-a8c2-ad6cebabc434-kube-api-access-knvfd" (OuterVolumeSpecName: "kube-api-access-knvfd") pod "e52f206e-7230-4c60-a8c2-ad6cebabc434" (UID: "e52f206e-7230-4c60-a8c2-ad6cebabc434"). InnerVolumeSpecName "kube-api-access-knvfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:52 crc kubenswrapper[4784]: I0123 06:42:52.987090 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-scripts" (OuterVolumeSpecName: "scripts") pod "e52f206e-7230-4c60-a8c2-ad6cebabc434" (UID: "e52f206e-7230-4c60-a8c2-ad6cebabc434"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.030604 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e52f206e-7230-4c60-a8c2-ad6cebabc434" (UID: "e52f206e-7230-4c60-a8c2-ad6cebabc434"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.055604 4784 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.055646 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knvfd\" (UniqueName: \"kubernetes.io/projected/e52f206e-7230-4c60-a8c2-ad6cebabc434-kube-api-access-knvfd\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.055657 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.055667 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.055677 4784 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e52f206e-7230-4c60-a8c2-ad6cebabc434-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.071465 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-config-data" (OuterVolumeSpecName: "config-data") pod "e52f206e-7230-4c60-a8c2-ad6cebabc434" (UID: "e52f206e-7230-4c60-a8c2-ad6cebabc434"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.154327 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:42:53 crc kubenswrapper[4784]: W0123 06:42:53.159932 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod727a8885_767d_45ab_a5d7_52a44e0d3823.slice/crio-134f0aff254663fb7f545f38afdecb0d1a37c0b8ae2523b131287a931907dba2 WatchSource:0}: Error finding container 134f0aff254663fb7f545f38afdecb0d1a37c0b8ae2523b131287a931907dba2: Status 404 returned error can't find the container with id 134f0aff254663fb7f545f38afdecb0d1a37c0b8ae2523b131287a931907dba2 Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.167452 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52f206e-7230-4c60-a8c2-ad6cebabc434-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.179604 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qmblm"] Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.193869 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qmblm"] Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.275899 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7d605a9-5002-443e-b7d3-8e8cb7922d10" path="/var/lib/kubelet/pods/d7d605a9-5002-443e-b7d3-8e8cb7922d10/volumes" Jan 23 06:42:53 crc kubenswrapper[4784]: E0123 06:42:53.310892 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" Jan 23 06:42:53 crc kubenswrapper[4784]: W0123 06:42:53.599592 4784 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06b51980_afe8_4434_b345_022a3be8f449.slice/crio-2d4bc549539ef3b0edae8305f3bd44274f0571cf3de2176197140fe44f36f7f8 WatchSource:0}: Error finding container 2d4bc549539ef3b0edae8305f3bd44274f0571cf3de2176197140fe44f36f7f8: Status 404 returned error can't find the container with id 2d4bc549539ef3b0edae8305f3bd44274f0571cf3de2176197140fe44f36f7f8 Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.603959 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.604026 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.609124 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.609264 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.616740 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd9757bf9-7tmmd"] Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.650082 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tvpzc" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.651378 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tvpzc" event={"ID":"e52f206e-7230-4c60-a8c2-ad6cebabc434","Type":"ContainerDied","Data":"6f84ad448a3aa46e346d1ec998c6283c889b41c454bc562176fc403bca41584a"} Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.651445 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f84ad448a3aa46e346d1ec998c6283c889b41c454bc562176fc403bca41584a" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.662070 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64f9677dc8-64nzl"] Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.663479 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9757bf9-7tmmd" event={"ID":"06b51980-afe8-4434-b345-022a3be8f449","Type":"ContainerStarted","Data":"2d4bc549539ef3b0edae8305f3bd44274f0571cf3de2176197140fe44f36f7f8"} Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.683447 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.684333 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" event={"ID":"11cf409f-f9ae-4e80-87db-66495679cf86","Type":"ContainerStarted","Data":"6f96780974e371d0ec2e305e1fe5f1b6989434f33a90950819ad98e8018bc385"} Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.695299 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wk74n"] Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.696826 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"727a8885-767d-45ab-a5d7-52a44e0d3823","Type":"ContainerStarted","Data":"134f0aff254663fb7f545f38afdecb0d1a37c0b8ae2523b131287a931907dba2"} Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.701305 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a78b6d3-fcc8-4cc3-a549-c0ba13460333","Type":"ContainerStarted","Data":"3573b2499f126073c97b800c170575f8674ea77a90aaaf05727e900fa83917e0"} Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.701771 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="ceilometer-notification-agent" containerID="cri-o://115347ccb5cd70bbf37dcefd06282d0840c29164d97d83ffad76d50c522ddca9" gracePeriod=30 Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.702038 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.702147 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="sg-core" containerID="cri-o://692ee3f328ad7a4c42001a6352a2335f90e697f9407ef9e436036b9e4a045645" gracePeriod=30 Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.702347 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="proxy-httpd" containerID="cri-o://3573b2499f126073c97b800c170575f8674ea77a90aaaf05727e900fa83917e0" gracePeriod=30 Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.983056 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Jan 23 06:42:53 crc kubenswrapper[4784]: I0123 06:42:53.983714 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="7f4508f4-6ead-496d-8449-fe100d604c5b" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:42:54 crc kubenswrapper[4784]: W0123 06:42:54.077619 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda51a41e9_6984_493a_b3af_ecee435cc80f.slice/crio-1b8c949707d53d20813248453c82f272ce5f46af4fdd2b4f8aa5a4eeecc61aa8 WatchSource:0}: Error finding container 1b8c949707d53d20813248453c82f272ce5f46af4fdd2b4f8aa5a4eeecc61aa8: Status 404 returned error can't find the container with id 1b8c949707d53d20813248453c82f272ce5f46af4fdd2b4f8aa5a4eeecc61aa8 Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.112830 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55f9fccfc8-b52jv"] Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.374298 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:42:54 crc kubenswrapper[4784]: E0123 06:42:54.375646 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52f206e-7230-4c60-a8c2-ad6cebabc434" containerName="cinder-db-sync" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.375666 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52f206e-7230-4c60-a8c2-ad6cebabc434" containerName="cinder-db-sync" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.375974 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52f206e-7230-4c60-a8c2-ad6cebabc434" containerName="cinder-db-sync" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.387542 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.393395 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-47crq" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.394399 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.394598 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.416173 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.445006 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.445068 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kkqm\" (UniqueName: \"kubernetes.io/projected/86f8ec13-a652-4f8d-83c9-c278bfbea888-kube-api-access-8kkqm\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.445264 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.445319 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.445547 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f8ec13-a652-4f8d-83c9-c278bfbea888-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.445622 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-scripts\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.460834 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wk74n"] Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.503002 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.542832 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vjn7m"] Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.545122 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.549232 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.549285 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kkqm\" (UniqueName: \"kubernetes.io/projected/86f8ec13-a652-4f8d-83c9-c278bfbea888-kube-api-access-8kkqm\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.549361 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.549404 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.549501 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f8ec13-a652-4f8d-83c9-c278bfbea888-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.549544 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-scripts\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.556346 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f8ec13-a652-4f8d-83c9-c278bfbea888-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.558833 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-scripts\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.561245 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.565031 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.571248 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.612102 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vjn7m"] Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.632800 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.660968 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.661098 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.664955 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kkqm\" (UniqueName: \"kubernetes.io/projected/86f8ec13-a652-4f8d-83c9-c278bfbea888-kube-api-access-8kkqm\") pod \"cinder-scheduler-0\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.665029 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.665413 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6mw\" (UniqueName: \"kubernetes.io/projected/750febeb-10c6-4c60-b3a8-de1e417213f4-kube-api-access-cq6mw\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.665555 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-config\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.665698 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.711564 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.715652 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.720109 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.743004 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.778698 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"727a8885-767d-45ab-a5d7-52a44e0d3823","Type":"ContainerStarted","Data":"1d557290414aad4bb6575fa54eecfd66169ceaad67c86ea15656278ddbd13988"} Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.782318 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.784180 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-wk74n" podUID="4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" containerName="init" containerID="cri-o://61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89" gracePeriod=10 Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.784317 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wk74n" event={"ID":"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612","Type":"ContainerStarted","Data":"8100bdf3091ecce495283b5b5ad1736a4fabd54edd7bac1e65bdf3d6d2081040"} Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.787598 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq6mw\" (UniqueName: \"kubernetes.io/projected/750febeb-10c6-4c60-b3a8-de1e417213f4-kube-api-access-cq6mw\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.788060 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-config\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.788094 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.788373 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.788761 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.788794 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-scripts\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.788833 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.788865 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80158c8a-929a-4c3a-870a-74970ca7a7ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.789739 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.792202 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.792380 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrk7p\" (UniqueName: \"kubernetes.io/projected/80158c8a-929a-4c3a-870a-74970ca7a7ef-kube-api-access-vrk7p\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.792455 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/80158c8a-929a-4c3a-870a-74970ca7a7ef-logs\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.792508 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.791500 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.795587 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-config\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.796727 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.798983 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.808290 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.833876 4784 generic.go:334] "Generic (PLEG): container finished" podID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerID="3573b2499f126073c97b800c170575f8674ea77a90aaaf05727e900fa83917e0" exitCode=0 Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.833940 4784 generic.go:334] "Generic (PLEG): container finished" podID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerID="692ee3f328ad7a4c42001a6352a2335f90e697f9407ef9e436036b9e4a045645" exitCode=2 Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.835840 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a78b6d3-fcc8-4cc3-a549-c0ba13460333","Type":"ContainerDied","Data":"3573b2499f126073c97b800c170575f8674ea77a90aaaf05727e900fa83917e0"} Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.836921 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a78b6d3-fcc8-4cc3-a549-c0ba13460333","Type":"ContainerDied","Data":"692ee3f328ad7a4c42001a6352a2335f90e697f9407ef9e436036b9e4a045645"} Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.852498 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq6mw\" (UniqueName: \"kubernetes.io/projected/750febeb-10c6-4c60-b3a8-de1e417213f4-kube-api-access-cq6mw\") pod \"dnsmasq-dns-5c9776ccc5-vjn7m\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.857352 
4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9fccfc8-b52jv" event={"ID":"a51a41e9-6984-493a-b3af-ecee435cc80f","Type":"ContainerStarted","Data":"1b8c949707d53d20813248453c82f272ce5f46af4fdd2b4f8aa5a4eeecc61aa8"} Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.895773 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-scripts\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.895830 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.895859 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80158c8a-929a-4c3a-870a-74970ca7a7ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.895889 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.895944 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrk7p\" (UniqueName: \"kubernetes.io/projected/80158c8a-929a-4c3a-870a-74970ca7a7ef-kube-api-access-vrk7p\") pod \"cinder-api-0\" (UID: 
\"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.896019 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80158c8a-929a-4c3a-870a-74970ca7a7ef-logs\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.896098 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.897922 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80158c8a-929a-4c3a-870a-74970ca7a7ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.899725 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80158c8a-929a-4c3a-870a-74970ca7a7ef-logs\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.903890 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.906887 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-scripts\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.909906 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.909977 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:54 crc kubenswrapper[4784]: I0123 06:42:54.939000 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrk7p\" (UniqueName: \"kubernetes.io/projected/80158c8a-929a-4c3a-870a-74970ca7a7ef-kube-api-access-vrk7p\") pod \"cinder-api-0\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " pod="openstack/cinder-api-0" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.021322 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.037823 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-866b9d495-7tw9h"] Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.038164 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-866b9d495-7tw9h" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-api" containerID="cri-o://de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c" gracePeriod=30 Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.038344 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-866b9d495-7tw9h" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-httpd" containerID="cri-o://6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e" gracePeriod=30 Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.096914 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.101429 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c68c7795c-7p5x6"] Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.104076 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.115495 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c68c7795c-7p5x6"] Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.185555 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-866b9d495-7tw9h" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": read tcp 10.217.0.2:33446->10.217.0.168:9696: read: connection reset by peer" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.207221 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-ovndb-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.207315 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-combined-ca-bundle\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.207357 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-public-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.207441 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6vv\" 
(UniqueName: \"kubernetes.io/projected/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-kube-api-access-lk6vv\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.207475 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-internal-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.207537 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-config\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.207566 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-httpd-config\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.311311 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6vv\" (UniqueName: \"kubernetes.io/projected/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-kube-api-access-lk6vv\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.311825 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-internal-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.311891 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-config\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.311909 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-httpd-config\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.312007 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-ovndb-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.312073 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-combined-ca-bundle\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.312124 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-public-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" 
(UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.336364 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-config\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.361134 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-public-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.362035 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk6vv\" (UniqueName: \"kubernetes.io/projected/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-kube-api-access-lk6vv\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.378323 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-combined-ca-bundle\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.378387 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-httpd-config\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 
06:42:55.381920 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-internal-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.439521 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d7a679e-f4a7-4d19-89bb-2140b97e32ed-ovndb-tls-certs\") pod \"neutron-6c68c7795c-7p5x6\" (UID: \"5d7a679e-f4a7-4d19-89bb-2140b97e32ed\") " pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.639706 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:42:55 crc kubenswrapper[4784]: W0123 06:42:55.662523 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f8ec13_a652_4f8d_83c9_c278bfbea888.slice/crio-52851fb4a6f2253298ec351f6ac83134cf0e2f99d17a2010c26d580999ebd12e WatchSource:0}: Error finding container 52851fb4a6f2253298ec351f6ac83134cf0e2f99d17a2010c26d580999ebd12e: Status 404 returned error can't find the container with id 52851fb4a6f2253298ec351f6ac83134cf0e2f99d17a2010c26d580999ebd12e Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.692038 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.767007 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.844520 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-swift-storage-0\") pod \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.844675 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-nb\") pod \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.844778 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-svc\") pod \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.844939 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpqtx\" (UniqueName: \"kubernetes.io/projected/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-kube-api-access-tpqtx\") pod \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.845010 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-config\") pod \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.845048 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-sb\") pod \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\" (UID: \"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612\") " Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.887037 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-kube-api-access-tpqtx" (OuterVolumeSpecName: "kube-api-access-tpqtx") pod "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" (UID: "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612"). InnerVolumeSpecName "kube-api-access-tpqtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.938285 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" (UID: "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.950024 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.950066 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpqtx\" (UniqueName: \"kubernetes.io/projected/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-kube-api-access-tpqtx\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.960520 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" (UID: "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.979514 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vjn7m"] Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.991077 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86f8ec13-a652-4f8d-83c9-c278bfbea888","Type":"ContainerStarted","Data":"52851fb4a6f2253298ec351f6ac83134cf0e2f99d17a2010c26d580999ebd12e"} Jan 23 06:42:55 crc kubenswrapper[4784]: I0123 06:42:55.996704 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.001889 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" (UID: "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.012917 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" (UID: "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.013820 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-config" (OuterVolumeSpecName: "config") pod "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" (UID: "4f807a9c-4dc2-4c2e-9e11-f6a433e4d612"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.020675 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"727a8885-767d-45ab-a5d7-52a44e0d3823","Type":"ContainerStarted","Data":"e685cc38f4965e1a393e2864e07f8dba584f47a9f6e587cfb25353b74f742fe0"} Jan 23 06:42:56 crc kubenswrapper[4784]: W0123 06:42:56.020826 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod750febeb_10c6_4c60_b3a8_de1e417213f4.slice/crio-30856eab4bba6d5aeb5d22a34b48f1d4d11986fc547858eeb6c168ef5921507e WatchSource:0}: Error finding container 30856eab4bba6d5aeb5d22a34b48f1d4d11986fc547858eeb6c168ef5921507e: Status 404 returned error can't find the container with id 30856eab4bba6d5aeb5d22a34b48f1d4d11986fc547858eeb6c168ef5921507e Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.021393 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.029031 4784 generic.go:334] "Generic (PLEG): container finished" podID="4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" containerID="61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89" exitCode=0 Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.029432 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wk74n" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.030332 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wk74n" event={"ID":"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612","Type":"ContainerDied","Data":"61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89"} Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.030410 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wk74n" event={"ID":"4f807a9c-4dc2-4c2e-9e11-f6a433e4d612","Type":"ContainerDied","Data":"8100bdf3091ecce495283b5b5ad1736a4fabd54edd7bac1e65bdf3d6d2081040"} Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.030437 4784 scope.go:117] "RemoveContainer" containerID="61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.062139 4784 generic.go:334] "Generic (PLEG): container finished" podID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerID="6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e" exitCode=0 Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.062234 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-866b9d495-7tw9h" event={"ID":"cf03953b-09e0-4872-ba7a-cacf7673f1af","Type":"ContainerDied","Data":"6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e"} Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.065240 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.065211743 podStartE2EDuration="6.065211743s" podCreationTimestamp="2026-01-23 06:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:56.057021192 +0000 UTC m=+1379.289529166" watchObservedRunningTime="2026-01-23 06:42:56.065211743 +0000 UTC m=+1379.297719717" Jan 23 
06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.065874 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.068941 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.068957 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.068968 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.075614 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9fccfc8-b52jv" event={"ID":"a51a41e9-6984-493a-b3af-ecee435cc80f","Type":"ContainerStarted","Data":"c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f"} Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.075670 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9fccfc8-b52jv" event={"ID":"a51a41e9-6984-493a-b3af-ecee435cc80f","Type":"ContainerStarted","Data":"4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b"} Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.076732 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.076786 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.137056 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55f9fccfc8-b52jv" podStartSLOduration=5.137027321 podStartE2EDuration="5.137027321s" podCreationTimestamp="2026-01-23 06:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:42:56.121900228 +0000 UTC m=+1379.354408212" watchObservedRunningTime="2026-01-23 06:42:56.137027321 +0000 UTC m=+1379.369535295" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.303545 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wk74n"] Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.327916 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wk74n"] Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.378071 4784 scope.go:117] "RemoveContainer" containerID="61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89" Jan 23 06:42:56 crc kubenswrapper[4784]: E0123 06:42:56.378780 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89\": container with ID starting with 61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89 not found: ID does not exist" containerID="61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.378824 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89"} err="failed to get container status \"61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89\": rpc error: code = NotFound desc = could not find container 
\"61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89\": container with ID starting with 61081f7fe9645fdd853eea25499edfb56ff68b3c9cace87adfa43455cabb7e89 not found: ID does not exist" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.453019 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.560208 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65775dd4cd-wxtf2" podUID="9a1391cd-fdf4-4770-ba43-17cb0657e117" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 23 06:42:56 crc kubenswrapper[4784]: I0123 06:42:56.903510 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c68c7795c-7p5x6"] Jan 23 06:42:57 crc kubenswrapper[4784]: I0123 06:42:57.221461 4784 generic.go:334] "Generic (PLEG): container finished" podID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerID="febd32782652ae77f74831c89fda60766f412d3ad4a5c80b91d35d58b9c1e39a" exitCode=0 Jan 23 06:42:57 crc kubenswrapper[4784]: I0123 06:42:57.224851 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" event={"ID":"750febeb-10c6-4c60-b3a8-de1e417213f4","Type":"ContainerDied","Data":"febd32782652ae77f74831c89fda60766f412d3ad4a5c80b91d35d58b9c1e39a"} Jan 23 06:42:57 crc kubenswrapper[4784]: I0123 06:42:57.225415 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" 
event={"ID":"750febeb-10c6-4c60-b3a8-de1e417213f4","Type":"ContainerStarted","Data":"30856eab4bba6d5aeb5d22a34b48f1d4d11986fc547858eeb6c168ef5921507e"} Jan 23 06:42:57 crc kubenswrapper[4784]: I0123 06:42:57.315683 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-866b9d495-7tw9h" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": dial tcp 10.217.0.168:9696: connect: connection refused" Jan 23 06:42:57 crc kubenswrapper[4784]: I0123 06:42:57.360252 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" path="/var/lib/kubelet/pods/4f807a9c-4dc2-4c2e-9e11-f6a433e4d612/volumes" Jan 23 06:42:57 crc kubenswrapper[4784]: I0123 06:42:57.361102 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80158c8a-929a-4c3a-870a-74970ca7a7ef","Type":"ContainerStarted","Data":"f16c47192ce9b79ea3b786c91ad551587e81713bbeb6bc332daf5ea230b2d1e8"} Jan 23 06:42:58 crc kubenswrapper[4784]: I0123 06:42:58.322583 4784 generic.go:334] "Generic (PLEG): container finished" podID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerID="115347ccb5cd70bbf37dcefd06282d0840c29164d97d83ffad76d50c522ddca9" exitCode=0 Jan 23 06:42:58 crc kubenswrapper[4784]: I0123 06:42:58.322648 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a78b6d3-fcc8-4cc3-a549-c0ba13460333","Type":"ContainerDied","Data":"115347ccb5cd70bbf37dcefd06282d0840c29164d97d83ffad76d50c522ddca9"} Jan 23 06:42:58 crc kubenswrapper[4784]: I0123 06:42:58.328828 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80158c8a-929a-4c3a-870a-74970ca7a7ef","Type":"ContainerStarted","Data":"fa467917079355ee58c0e4421d873c1c316ec477f0b056722cb52a55a9b61e94"} Jan 23 06:42:58 crc kubenswrapper[4784]: I0123 06:42:58.456773 4784 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.285623 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b88c57956-78khw"] Jan 23 06:42:59 crc kubenswrapper[4784]: E0123 06:42:59.286625 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" containerName="init" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.286657 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" containerName="init" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.287590 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f807a9c-4dc2-4c2e-9e11-f6a433e4d612" containerName="init" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.289274 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b88c57956-78khw"] Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.289431 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.296860 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.297161 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.386066 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwccf\" (UniqueName: \"kubernetes.io/projected/56a8c456-d460-464b-9425-0d5878f12ba5-kube-api-access-kwccf\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.388251 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-public-tls-certs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.388360 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-combined-ca-bundle\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.390019 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56a8c456-d460-464b-9425-0d5878f12ba5-logs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " 
pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.390142 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-internal-tls-certs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.390190 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-config-data-custom\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.390474 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-config-data\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.420865 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c68c7795c-7p5x6" event={"ID":"5d7a679e-f4a7-4d19-89bb-2140b97e32ed","Type":"ContainerStarted","Data":"7dcdca586c0d71adef1948e42a9ae51888de2f69bd701b7823e636ade858bac0"} Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.493073 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwccf\" (UniqueName: \"kubernetes.io/projected/56a8c456-d460-464b-9425-0d5878f12ba5-kube-api-access-kwccf\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 
crc kubenswrapper[4784]: I0123 06:42:59.493176 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-public-tls-certs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.493202 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-combined-ca-bundle\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.493226 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56a8c456-d460-464b-9425-0d5878f12ba5-logs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.493263 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-internal-tls-certs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.493282 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-config-data-custom\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.493338 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-config-data\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.494269 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56a8c456-d460-464b-9425-0d5878f12ba5-logs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.509653 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-config-data-custom\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.510293 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-public-tls-certs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.511111 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-combined-ca-bundle\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.512557 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-config-data\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.515575 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56a8c456-d460-464b-9425-0d5878f12ba5-internal-tls-certs\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.520429 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwccf\" (UniqueName: \"kubernetes.io/projected/56a8c456-d460-464b-9425-0d5878f12ba5-kube-api-access-kwccf\") pod \"barbican-api-b88c57956-78khw\" (UID: \"56a8c456-d460-464b-9425-0d5878f12ba5\") " pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.745046 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.762008 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.908303 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-combined-ca-bundle\") pod \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.908390 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-scripts\") pod \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.908542 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-log-httpd\") pod \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.908607 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-run-httpd\") pod \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.908667 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-config-data\") pod \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.908824 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-sg-core-conf-yaml\") pod \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.909029 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmrfk\" (UniqueName: \"kubernetes.io/projected/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-kube-api-access-kmrfk\") pod \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\" (UID: \"3a78b6d3-fcc8-4cc3-a549-c0ba13460333\") " Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.909883 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3a78b6d3-fcc8-4cc3-a549-c0ba13460333" (UID: "3a78b6d3-fcc8-4cc3-a549-c0ba13460333"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.910130 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3a78b6d3-fcc8-4cc3-a549-c0ba13460333" (UID: "3a78b6d3-fcc8-4cc3-a549-c0ba13460333"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.915198 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-kube-api-access-kmrfk" (OuterVolumeSpecName: "kube-api-access-kmrfk") pod "3a78b6d3-fcc8-4cc3-a549-c0ba13460333" (UID: "3a78b6d3-fcc8-4cc3-a549-c0ba13460333"). InnerVolumeSpecName "kube-api-access-kmrfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:42:59 crc kubenswrapper[4784]: I0123 06:42:59.918055 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-scripts" (OuterVolumeSpecName: "scripts") pod "3a78b6d3-fcc8-4cc3-a549-c0ba13460333" (UID: "3a78b6d3-fcc8-4cc3-a549-c0ba13460333"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.066406 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmrfk\" (UniqueName: \"kubernetes.io/projected/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-kube-api-access-kmrfk\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.066880 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.066897 4784 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.066909 4784 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.067727 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3a78b6d3-fcc8-4cc3-a549-c0ba13460333" (UID: "3a78b6d3-fcc8-4cc3-a549-c0ba13460333"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.084831 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a78b6d3-fcc8-4cc3-a549-c0ba13460333" (UID: "3a78b6d3-fcc8-4cc3-a549-c0ba13460333"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.156980 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-config-data" (OuterVolumeSpecName: "config-data") pod "3a78b6d3-fcc8-4cc3-a549-c0ba13460333" (UID: "3a78b6d3-fcc8-4cc3-a549-c0ba13460333"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.159714 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.169246 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.169280 4784 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.169292 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a78b6d3-fcc8-4cc3-a549-c0ba13460333-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.459385 4784 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-api-b88c57956-78khw"] Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.465682 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9757bf9-7tmmd" event={"ID":"06b51980-afe8-4434-b345-022a3be8f449","Type":"ContainerStarted","Data":"83161ffbe7a209396185e3d8e951aedbf2cfed3663436c32acb8853c185a19f0"} Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.476074 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" event={"ID":"750febeb-10c6-4c60-b3a8-de1e417213f4","Type":"ContainerStarted","Data":"e10ba1bb494c3c49f0632a1f5c80940d6c7cee912fac556a037cd7f4cf53d8f3"} Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.479270 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.495711 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c68c7795c-7p5x6" event={"ID":"5d7a679e-f4a7-4d19-89bb-2140b97e32ed","Type":"ContainerStarted","Data":"c048108c6d354e696333eb0afe59945e8ec4dc3ef5846ba27f1e4df3fc845fd9"} Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.508262 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3a78b6d3-fcc8-4cc3-a549-c0ba13460333","Type":"ContainerDied","Data":"bf023f14f1a80b339c6f2a5bb4c08aadbcd1a8088f73402d075d5b64d54f307b"} Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.508341 4784 scope.go:117] "RemoveContainer" containerID="3573b2499f126073c97b800c170575f8674ea77a90aaaf05727e900fa83917e0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.508584 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.527447 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" podStartSLOduration=6.5274091819999995 podStartE2EDuration="6.527409182s" podCreationTimestamp="2026-01-23 06:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:00.509830319 +0000 UTC m=+1383.742338293" watchObservedRunningTime="2026-01-23 06:43:00.527409182 +0000 UTC m=+1383.759917176" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.639288 4784 scope.go:117] "RemoveContainer" containerID="692ee3f328ad7a4c42001a6352a2335f90e697f9407ef9e436036b9e4a045645" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.734829 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.761119 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.782231 4784 scope.go:117] "RemoveContainer" containerID="115347ccb5cd70bbf37dcefd06282d0840c29164d97d83ffad76d50c522ddca9" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.795996 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:00 crc kubenswrapper[4784]: E0123 06:43:00.797404 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="sg-core" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.797457 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="sg-core" Jan 23 06:43:00 crc kubenswrapper[4784]: E0123 06:43:00.797481 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" 
containerName="proxy-httpd" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.797490 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="proxy-httpd" Jan 23 06:43:00 crc kubenswrapper[4784]: E0123 06:43:00.797545 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="ceilometer-notification-agent" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.797555 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="ceilometer-notification-agent" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.797833 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="ceilometer-notification-agent" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.797880 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="proxy-httpd" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.797902 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" containerName="sg-core" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.801872 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.805177 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.806163 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.814334 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.903723 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-run-httpd\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.904318 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-config-data\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.904364 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.904409 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-log-httpd\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " 
pod="openstack/ceilometer-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.904485 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgbz\" (UniqueName: \"kubernetes.io/projected/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-kube-api-access-9jgbz\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.904520 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:00 crc kubenswrapper[4784]: I0123 06:43:00.904581 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-scripts\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.007429 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgbz\" (UniqueName: \"kubernetes.io/projected/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-kube-api-access-9jgbz\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.007520 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.008768 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-scripts\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.008878 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-run-httpd\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.008916 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-config-data\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.008982 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.009062 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-log-httpd\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.009637 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-run-httpd\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 
06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.009700 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-log-httpd\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.012885 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.012964 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.016445 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.017826 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-config-data\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.019586 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.019959 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-scripts\") pod \"ceilometer-0\" (UID: 
\"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.034132 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.043193 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgbz\" (UniqueName: \"kubernetes.io/projected/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-kube-api-access-9jgbz\") pod \"ceilometer-0\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.159587 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.286744 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a78b6d3-fcc8-4cc3-a549-c0ba13460333" path="/var/lib/kubelet/pods/3a78b6d3-fcc8-4cc3-a549-c0ba13460333/volumes" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.532389 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.555166 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" event={"ID":"11cf409f-f9ae-4e80-87db-66495679cf86","Type":"ContainerStarted","Data":"700b56fb41339b123e2bf4957deb84ca40aa43572c0b5118f36adfd7413a71b9"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.555230 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" event={"ID":"11cf409f-f9ae-4e80-87db-66495679cf86","Type":"ContainerStarted","Data":"636eadba9574c702a88ba33dc2bdf35bc8ba603d503185a28f12fe0441816d9d"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.578632 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86f8ec13-a652-4f8d-83c9-c278bfbea888","Type":"ContainerStarted","Data":"fe28b99b7850613664106e1c6ac225f747959ef5a0b0c51ebd3bef9d7e7d13b8"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.617452 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-64f9677dc8-64nzl" podStartSLOduration=4.759848595 podStartE2EDuration="10.617421095s" podCreationTimestamp="2026-01-23 06:42:51 +0000 UTC" firstStartedPulling="2026-01-23 06:42:53.609559091 +0000 UTC m=+1376.842067065" lastFinishedPulling="2026-01-23 06:42:59.467131571 +0000 UTC m=+1382.699639565" observedRunningTime="2026-01-23 06:43:01.615228951 +0000 UTC m=+1384.847736925" watchObservedRunningTime="2026-01-23 06:43:01.617421095 +0000 UTC m=+1384.849929069" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.634713 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b88c57956-78khw" event={"ID":"56a8c456-d460-464b-9425-0d5878f12ba5","Type":"ContainerStarted","Data":"80416c554b0f40faee2d2b62228d69be5a62f84eee6022db9ef634ef826ba961"} Jan 23 
06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.634799 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b88c57956-78khw" event={"ID":"56a8c456-d460-464b-9425-0d5878f12ba5","Type":"ContainerStarted","Data":"a6e3ecf9c5e8b991d46ebef64d52584276d02ee1efd5d6d03aa1d264744cd4ea"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.652545 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c68c7795c-7p5x6" event={"ID":"5d7a679e-f4a7-4d19-89bb-2140b97e32ed","Type":"ContainerStarted","Data":"f70f71427e5fee70352e92c74ddf9189bff2a9a9af68fbe4187d0bf85fbfe062"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.653437 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.675070 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80158c8a-929a-4c3a-870a-74970ca7a7ef","Type":"ContainerStarted","Data":"6809ca070c2f70033bcdbcea9c9dcdec5a8b43d3c1e6c1236b35a8b1a70b56ff"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.675395 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerName="cinder-api-log" containerID="cri-o://fa467917079355ee58c0e4421d873c1c316ec477f0b056722cb52a55a9b61e94" gracePeriod=30 Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.676852 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.676925 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerName="cinder-api" containerID="cri-o://6809ca070c2f70033bcdbcea9c9dcdec5a8b43d3c1e6c1236b35a8b1a70b56ff" gracePeriod=30 Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 
06:43:01.698903 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-internal-tls-certs\") pod \"cf03953b-09e0-4872-ba7a-cacf7673f1af\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.699886 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-config\") pod \"cf03953b-09e0-4872-ba7a-cacf7673f1af\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.700166 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-combined-ca-bundle\") pod \"cf03953b-09e0-4872-ba7a-cacf7673f1af\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.700300 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf5rc\" (UniqueName: \"kubernetes.io/projected/cf03953b-09e0-4872-ba7a-cacf7673f1af-kube-api-access-pf5rc\") pod \"cf03953b-09e0-4872-ba7a-cacf7673f1af\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.700434 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-httpd-config\") pod \"cf03953b-09e0-4872-ba7a-cacf7673f1af\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.700558 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-public-tls-certs\") pod 
\"cf03953b-09e0-4872-ba7a-cacf7673f1af\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.700652 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-ovndb-tls-certs\") pod \"cf03953b-09e0-4872-ba7a-cacf7673f1af\" (UID: \"cf03953b-09e0-4872-ba7a-cacf7673f1af\") " Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.757670 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf03953b-09e0-4872-ba7a-cacf7673f1af-kube-api-access-pf5rc" (OuterVolumeSpecName: "kube-api-access-pf5rc") pod "cf03953b-09e0-4872-ba7a-cacf7673f1af" (UID: "cf03953b-09e0-4872-ba7a-cacf7673f1af"). InnerVolumeSpecName "kube-api-access-pf5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.761962 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cf03953b-09e0-4872-ba7a-cacf7673f1af" (UID: "cf03953b-09e0-4872-ba7a-cacf7673f1af"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.767373 4784 generic.go:334] "Generic (PLEG): container finished" podID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerID="de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c" exitCode=0 Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.767729 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-866b9d495-7tw9h" event={"ID":"cf03953b-09e0-4872-ba7a-cacf7673f1af","Type":"ContainerDied","Data":"de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.767929 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-866b9d495-7tw9h" event={"ID":"cf03953b-09e0-4872-ba7a-cacf7673f1af","Type":"ContainerDied","Data":"08f825cbc34370070536549d9e9b5873ed4e0ff51c18da0071603fb823241fc4"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.767998 4784 scope.go:117] "RemoveContainer" containerID="6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.768763 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-866b9d495-7tw9h" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.781310 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9757bf9-7tmmd" event={"ID":"06b51980-afe8-4434-b345-022a3be8f449","Type":"ContainerStarted","Data":"655ef9662718a04df514c4a8a9abfb3b30fd948e6c7890cc5e5ca204b308637e"} Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.785144 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c68c7795c-7p5x6" podStartSLOduration=6.785108283 podStartE2EDuration="6.785108283s" podCreationTimestamp="2026-01-23 06:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:01.696076861 +0000 UTC m=+1384.928584825" watchObservedRunningTime="2026-01-23 06:43:01.785108283 +0000 UTC m=+1385.017616257" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.803223 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.803190228 podStartE2EDuration="7.803190228s" podCreationTimestamp="2026-01-23 06:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:01.771057037 +0000 UTC m=+1385.003565011" watchObservedRunningTime="2026-01-23 06:43:01.803190228 +0000 UTC m=+1385.035698202" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.806527 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.863000 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf5rc\" (UniqueName: \"kubernetes.io/projected/cf03953b-09e0-4872-ba7a-cacf7673f1af-kube-api-access-pf5rc\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 
06:43:01.863216 4784 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.891800 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fd9757bf9-7tmmd" podStartSLOduration=5.069526448 podStartE2EDuration="10.891746458s" podCreationTimestamp="2026-01-23 06:42:51 +0000 UTC" firstStartedPulling="2026-01-23 06:42:53.609867118 +0000 UTC m=+1376.842375092" lastFinishedPulling="2026-01-23 06:42:59.432087128 +0000 UTC m=+1382.664595102" observedRunningTime="2026-01-23 06:43:01.810118909 +0000 UTC m=+1385.042626883" watchObservedRunningTime="2026-01-23 06:43:01.891746458 +0000 UTC m=+1385.124254432" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.981407 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cf03953b-09e0-4872-ba7a-cacf7673f1af" (UID: "cf03953b-09e0-4872-ba7a-cacf7673f1af"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:01 crc kubenswrapper[4784]: I0123 06:43:01.990172 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.000026 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-config" (OuterVolumeSpecName: "config") pod "cf03953b-09e0-4872-ba7a-cacf7673f1af" (UID: "cf03953b-09e0-4872-ba7a-cacf7673f1af"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.002237 4784 scope.go:117] "RemoveContainer" containerID="de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.033658 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cf03953b-09e0-4872-ba7a-cacf7673f1af" (UID: "cf03953b-09e0-4872-ba7a-cacf7673f1af"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.036922 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf03953b-09e0-4872-ba7a-cacf7673f1af" (UID: "cf03953b-09e0-4872-ba7a-cacf7673f1af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.070907 4784 scope.go:117] "RemoveContainer" containerID="6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e" Jan 23 06:43:02 crc kubenswrapper[4784]: E0123 06:43:02.071568 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e\": container with ID starting with 6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e not found: ID does not exist" containerID="6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.071605 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e"} err="failed to get container status \"6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e\": rpc error: code = NotFound desc = could not find container \"6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e\": container with ID starting with 6debc11b551b0e401c6fe0c115d122bbef9f8cddf7388e614fcd2cb0dd28f06e not found: ID does not exist" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.071633 4784 scope.go:117] "RemoveContainer" containerID="de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c" Jan 23 06:43:02 crc kubenswrapper[4784]: E0123 06:43:02.071975 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c\": container with ID starting with de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c not found: ID does not exist" containerID="de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.071998 
4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c"} err="failed to get container status \"de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c\": rpc error: code = NotFound desc = could not find container \"de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c\": container with ID starting with de88068fdf5c7551503870f1babedd16c493e4c395d7b45804683d418532131c not found: ID does not exist" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.075252 4784 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.075281 4784 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.075293 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.075305 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.217100 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cf03953b-09e0-4872-ba7a-cacf7673f1af" (UID: "cf03953b-09e0-4872-ba7a-cacf7673f1af"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.282342 4784 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf03953b-09e0-4872-ba7a-cacf7673f1af-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.430246 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-866b9d495-7tw9h"] Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.447197 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-866b9d495-7tw9h"] Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.792765 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerStarted","Data":"f9e16b99114a8e4546677a2001edcea86773b0cc44d09f8ceb927e314a408dac"} Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.796724 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86f8ec13-a652-4f8d-83c9-c278bfbea888","Type":"ContainerStarted","Data":"8d3c8e08377ab88967aa77a662956ffbc247c4368de0429b06c380998ac44aec"} Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.801317 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b88c57956-78khw" event={"ID":"56a8c456-d460-464b-9425-0d5878f12ba5","Type":"ContainerStarted","Data":"31b9bafbf12cbd23daeeccba440c5d63265226320df68ffd9f107d5b240738cb"} Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.801438 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.801765 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.807816 4784 generic.go:334] 
"Generic (PLEG): container finished" podID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerID="6809ca070c2f70033bcdbcea9c9dcdec5a8b43d3c1e6c1236b35a8b1a70b56ff" exitCode=0 Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.807864 4784 generic.go:334] "Generic (PLEG): container finished" podID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerID="fa467917079355ee58c0e4421d873c1c316ec477f0b056722cb52a55a9b61e94" exitCode=143 Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.807960 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80158c8a-929a-4c3a-870a-74970ca7a7ef","Type":"ContainerDied","Data":"6809ca070c2f70033bcdbcea9c9dcdec5a8b43d3c1e6c1236b35a8b1a70b56ff"} Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.808001 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80158c8a-929a-4c3a-870a-74970ca7a7ef","Type":"ContainerDied","Data":"fa467917079355ee58c0e4421d873c1c316ec477f0b056722cb52a55a9b61e94"} Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.846099 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.084702846 podStartE2EDuration="8.846059391s" podCreationTimestamp="2026-01-23 06:42:54 +0000 UTC" firstStartedPulling="2026-01-23 06:42:55.674450044 +0000 UTC m=+1378.906958018" lastFinishedPulling="2026-01-23 06:42:59.435806589 +0000 UTC m=+1382.668314563" observedRunningTime="2026-01-23 06:43:02.822522212 +0000 UTC m=+1386.055030186" watchObservedRunningTime="2026-01-23 06:43:02.846059391 +0000 UTC m=+1386.078567365" Jan 23 06:43:02 crc kubenswrapper[4784]: I0123 06:43:02.886199 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b88c57956-78khw" podStartSLOduration=3.886135887 podStartE2EDuration="3.886135887s" podCreationTimestamp="2026-01-23 06:42:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:02.861382318 +0000 UTC m=+1386.093890282" watchObservedRunningTime="2026-01-23 06:43:02.886135887 +0000 UTC m=+1386.118643881" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.349078 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" path="/var/lib/kubelet/pods/cf03953b-09e0-4872-ba7a-cacf7673f1af/volumes" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.492714 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.632386 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data\") pod \"80158c8a-929a-4c3a-870a-74970ca7a7ef\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.632448 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-combined-ca-bundle\") pod \"80158c8a-929a-4c3a-870a-74970ca7a7ef\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.632550 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80158c8a-929a-4c3a-870a-74970ca7a7ef-logs\") pod \"80158c8a-929a-4c3a-870a-74970ca7a7ef\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.632621 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80158c8a-929a-4c3a-870a-74970ca7a7ef-etc-machine-id\") pod \"80158c8a-929a-4c3a-870a-74970ca7a7ef\" (UID: 
\"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.632691 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-scripts\") pod \"80158c8a-929a-4c3a-870a-74970ca7a7ef\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.632895 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrk7p\" (UniqueName: \"kubernetes.io/projected/80158c8a-929a-4c3a-870a-74970ca7a7ef-kube-api-access-vrk7p\") pod \"80158c8a-929a-4c3a-870a-74970ca7a7ef\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.632977 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data-custom\") pod \"80158c8a-929a-4c3a-870a-74970ca7a7ef\" (UID: \"80158c8a-929a-4c3a-870a-74970ca7a7ef\") " Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.633539 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80158c8a-929a-4c3a-870a-74970ca7a7ef-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "80158c8a-929a-4c3a-870a-74970ca7a7ef" (UID: "80158c8a-929a-4c3a-870a-74970ca7a7ef"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.634004 4784 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80158c8a-929a-4c3a-870a-74970ca7a7ef-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.634066 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80158c8a-929a-4c3a-870a-74970ca7a7ef-logs" (OuterVolumeSpecName: "logs") pod "80158c8a-929a-4c3a-870a-74970ca7a7ef" (UID: "80158c8a-929a-4c3a-870a-74970ca7a7ef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.647377 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-scripts" (OuterVolumeSpecName: "scripts") pod "80158c8a-929a-4c3a-870a-74970ca7a7ef" (UID: "80158c8a-929a-4c3a-870a-74970ca7a7ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.652101 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "80158c8a-929a-4c3a-870a-74970ca7a7ef" (UID: "80158c8a-929a-4c3a-870a-74970ca7a7ef"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.690874 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80158c8a-929a-4c3a-870a-74970ca7a7ef-kube-api-access-vrk7p" (OuterVolumeSpecName: "kube-api-access-vrk7p") pod "80158c8a-929a-4c3a-870a-74970ca7a7ef" (UID: "80158c8a-929a-4c3a-870a-74970ca7a7ef"). 
InnerVolumeSpecName "kube-api-access-vrk7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.692020 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80158c8a-929a-4c3a-870a-74970ca7a7ef" (UID: "80158c8a-929a-4c3a-870a-74970ca7a7ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.744588 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.745011 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80158c8a-929a-4c3a-870a-74970ca7a7ef-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.745080 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.745156 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrk7p\" (UniqueName: \"kubernetes.io/projected/80158c8a-929a-4c3a-870a-74970ca7a7ef-kube-api-access-vrk7p\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.745248 4784 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.767883 4784 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data" (OuterVolumeSpecName: "config-data") pod "80158c8a-929a-4c3a-870a-74970ca7a7ef" (UID: "80158c8a-929a-4c3a-870a-74970ca7a7ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.823932 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"80158c8a-929a-4c3a-870a-74970ca7a7ef","Type":"ContainerDied","Data":"f16c47192ce9b79ea3b786c91ad551587e81713bbeb6bc332daf5ea230b2d1e8"} Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.824477 4784 scope.go:117] "RemoveContainer" containerID="6809ca070c2f70033bcdbcea9c9dcdec5a8b43d3c1e6c1236b35a8b1a70b56ff" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.823986 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.847775 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80158c8a-929a-4c3a-870a-74970ca7a7ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.912848 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:43:03 crc kubenswrapper[4784]: I0123 06:43:03.955734 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.004566 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:43:04 crc kubenswrapper[4784]: E0123 06:43:04.005766 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-api" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.005888 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-api" Jan 23 06:43:04 crc kubenswrapper[4784]: E0123 06:43:04.006009 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerName="cinder-api" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.006098 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerName="cinder-api" Jan 23 06:43:04 crc kubenswrapper[4784]: E0123 06:43:04.006214 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-httpd" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.006305 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-httpd" Jan 23 06:43:04 crc kubenswrapper[4784]: E0123 06:43:04.006419 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerName="cinder-api-log" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.006504 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerName="cinder-api-log" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.006896 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-httpd" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.007027 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerName="cinder-api-log" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.007115 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" containerName="cinder-api" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.007209 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cf03953b-09e0-4872-ba7a-cacf7673f1af" containerName="neutron-api" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.009073 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.013785 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.014145 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.015193 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.021128 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.078722 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-scripts\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.078854 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.078906 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6pb5\" (UniqueName: \"kubernetes.io/projected/dfb6df04-2d1b-4058-b54e-122d31b83c46-kube-api-access-l6pb5\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " 
pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.078933 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-config-data-custom\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.078996 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-config-data\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.079017 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.079085 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb6df04-2d1b-4058-b54e-122d31b83c46-logs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.079119 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.079140 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dfb6df04-2d1b-4058-b54e-122d31b83c46-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.181510 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-config-data\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.181610 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.181727 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb6df04-2d1b-4058-b54e-122d31b83c46-logs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.181794 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.181843 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dfb6df04-2d1b-4058-b54e-122d31b83c46-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.181902 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-scripts\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.181968 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.182025 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6pb5\" (UniqueName: \"kubernetes.io/projected/dfb6df04-2d1b-4058-b54e-122d31b83c46-kube-api-access-l6pb5\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.182060 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-config-data-custom\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.189878 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-config-data\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.190820 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-config-data-custom\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.190880 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-public-tls-certs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.191366 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dfb6df04-2d1b-4058-b54e-122d31b83c46-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.191916 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb6df04-2d1b-4058-b54e-122d31b83c46-logs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.196077 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-scripts\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.202473 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.210021 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb6df04-2d1b-4058-b54e-122d31b83c46-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.215523 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6pb5\" (UniqueName: \"kubernetes.io/projected/dfb6df04-2d1b-4058-b54e-122d31b83c46-kube-api-access-l6pb5\") pod \"cinder-api-0\" (UID: \"dfb6df04-2d1b-4058-b54e-122d31b83c46\") " pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.337935 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.771115 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="73417c1c-ce94-42f8-bdcb-6db903adc851" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.771591 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="73417c1c-ce94-42f8-bdcb-6db903adc851" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.779816 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.783276 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 06:43:04 crc kubenswrapper[4784]: I0123 06:43:04.809568 4784 scope.go:117] "RemoveContainer" containerID="fa467917079355ee58c0e4421d873c1c316ec477f0b056722cb52a55a9b61e94" Jan 23 06:43:05 crc kubenswrapper[4784]: I0123 06:43:05.024637 4784 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:43:05 crc kubenswrapper[4784]: I0123 06:43:05.099496 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fpd8v"] Jan 23 06:43:05 crc kubenswrapper[4784]: I0123 06:43:05.100240 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" podUID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerName="dnsmasq-dns" containerID="cri-o://44aef79274973ca65bd96ff6ae614bde5568a481ed5cb84ef671f163d3f8de58" gracePeriod=10 Jan 23 06:43:05 crc kubenswrapper[4784]: I0123 06:43:05.283960 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80158c8a-929a-4c3a-870a-74970ca7a7ef" path="/var/lib/kubelet/pods/80158c8a-929a-4c3a-870a-74970ca7a7ef/volumes" Jan 23 06:43:05 crc kubenswrapper[4784]: I0123 06:43:05.445987 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:43:05 crc kubenswrapper[4784]: W0123 06:43:05.447155 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfb6df04_2d1b_4058_b54e_122d31b83c46.slice/crio-38613ee6956ef55047fea4d57f42a0cb1d81d092a1c8dfe83346704190481d07 WatchSource:0}: Error finding container 38613ee6956ef55047fea4d57f42a0cb1d81d092a1c8dfe83346704190481d07: Status 404 returned error can't find the container with id 38613ee6956ef55047fea4d57f42a0cb1d81d092a1c8dfe83346704190481d07 Jan 23 06:43:05 crc kubenswrapper[4784]: I0123 06:43:05.890650 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dfb6df04-2d1b-4058-b54e-122d31b83c46","Type":"ContainerStarted","Data":"38613ee6956ef55047fea4d57f42a0cb1d81d092a1c8dfe83346704190481d07"} Jan 23 06:43:07 crc kubenswrapper[4784]: I0123 06:43:07.853477 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-55f9fccfc8-b52jv" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:43:08 crc kubenswrapper[4784]: I0123 06:43:08.928731 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerStarted","Data":"8bdde3e5eb4cafd08b90cdddb93b764ec123bd977a5c72329f8c94528ca49610"} Jan 23 06:43:08 crc kubenswrapper[4784]: I0123 06:43:08.930703 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dfb6df04-2d1b-4058-b54e-122d31b83c46","Type":"ContainerStarted","Data":"7af67a81de4801c642b5de1199a91755a3cf7015f9e9b3595d4fcc5fb9745b9f"} Jan 23 06:43:08 crc kubenswrapper[4784]: I0123 06:43:08.933037 4784 generic.go:334] "Generic (PLEG): container finished" podID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerID="44aef79274973ca65bd96ff6ae614bde5568a481ed5cb84ef671f163d3f8de58" exitCode=0 Jan 23 06:43:08 crc kubenswrapper[4784]: I0123 06:43:08.933071 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" event={"ID":"239c12d0-5821-4bcc-9b6e-b90a896731cd","Type":"ContainerDied","Data":"44aef79274973ca65bd96ff6ae614bde5568a481ed5cb84ef671f163d3f8de58"} Jan 23 06:43:09 crc kubenswrapper[4784]: I0123 06:43:09.366213 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5959d8d8f9-nvgzc" Jan 23 06:43:09 crc kubenswrapper[4784]: I0123 06:43:09.543995 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" podUID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Jan 23 06:43:09 crc kubenswrapper[4784]: I0123 
06:43:09.647286 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:43:10 crc kubenswrapper[4784]: I0123 06:43:10.135213 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:43:10 crc kubenswrapper[4784]: I0123 06:43:10.155475 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 06:43:10 crc kubenswrapper[4784]: I0123 06:43:10.227059 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:43:10 crc kubenswrapper[4784]: I0123 06:43:10.253366 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:43:10 crc kubenswrapper[4784]: I0123 06:43:10.957691 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerName="cinder-scheduler" containerID="cri-o://fe28b99b7850613664106e1c6ac225f747959ef5a0b0c51ebd3bef9d7e7d13b8" gracePeriod=30 Jan 23 06:43:10 crc kubenswrapper[4784]: I0123 06:43:10.957782 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerName="probe" containerID="cri-o://8d3c8e08377ab88967aa77a662956ffbc247c4368de0429b06c380998ac44aec" gracePeriod=30 Jan 23 06:43:11 crc kubenswrapper[4784]: I0123 06:43:11.440674 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:43:11 crc kubenswrapper[4784]: I0123 06:43:11.452570 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65d5f4f9bd-jjkgn" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.005870 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"dfb6df04-2d1b-4058-b54e-122d31b83c46","Type":"ContainerStarted","Data":"52ab661e77cb2e560ce1a6a3becb870a102d409bd096fd65f9c5911e7166fa5b"} Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.367416 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.438682 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-swift-storage-0\") pod \"239c12d0-5821-4bcc-9b6e-b90a896731cd\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.438783 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v8c6\" (UniqueName: \"kubernetes.io/projected/239c12d0-5821-4bcc-9b6e-b90a896731cd-kube-api-access-6v8c6\") pod \"239c12d0-5821-4bcc-9b6e-b90a896731cd\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.438823 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-sb\") pod \"239c12d0-5821-4bcc-9b6e-b90a896731cd\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.438879 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-nb\") pod \"239c12d0-5821-4bcc-9b6e-b90a896731cd\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.439020 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-svc\") pod \"239c12d0-5821-4bcc-9b6e-b90a896731cd\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.439049 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-config\") pod \"239c12d0-5821-4bcc-9b6e-b90a896731cd\" (UID: \"239c12d0-5821-4bcc-9b6e-b90a896731cd\") " Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.516201 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239c12d0-5821-4bcc-9b6e-b90a896731cd-kube-api-access-6v8c6" (OuterVolumeSpecName: "kube-api-access-6v8c6") pod "239c12d0-5821-4bcc-9b6e-b90a896731cd" (UID: "239c12d0-5821-4bcc-9b6e-b90a896731cd"). InnerVolumeSpecName "kube-api-access-6v8c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.565336 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v8c6\" (UniqueName: \"kubernetes.io/projected/239c12d0-5821-4bcc-9b6e-b90a896731cd-kube-api-access-6v8c6\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.567041 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "239c12d0-5821-4bcc-9b6e-b90a896731cd" (UID: "239c12d0-5821-4bcc-9b6e-b90a896731cd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.595499 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "239c12d0-5821-4bcc-9b6e-b90a896731cd" (UID: "239c12d0-5821-4bcc-9b6e-b90a896731cd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.622334 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "239c12d0-5821-4bcc-9b6e-b90a896731cd" (UID: "239c12d0-5821-4bcc-9b6e-b90a896731cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.644778 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "239c12d0-5821-4bcc-9b6e-b90a896731cd" (UID: "239c12d0-5821-4bcc-9b6e-b90a896731cd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.667226 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.667261 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.667273 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.667282 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.731917 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-65775dd4cd-wxtf2" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.747060 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.823859 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79d47d6854-hfx9p"] Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.824474 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon-log" containerID="cri-o://0b025da38950e35051ff144502203a873d0391f48eac9ab72a2003adfd788b87" gracePeriod=30 
Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.825019 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" containerID="cri-o://8ceae607bf3d1a305e21df79b6d78c685530e9c5947012ef6b094625790484a4" gracePeriod=30 Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.839945 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.889283 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-config" (OuterVolumeSpecName: "config") pod "239c12d0-5821-4bcc-9b6e-b90a896731cd" (UID: "239c12d0-5821-4bcc-9b6e-b90a896731cd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.897170 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/239c12d0-5821-4bcc-9b6e-b90a896731cd-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.967718 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 06:43:12 crc kubenswrapper[4784]: E0123 06:43:12.973438 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerName="init" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.973487 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerName="init" Jan 23 06:43:12 crc kubenswrapper[4784]: E0123 06:43:12.973515 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerName="dnsmasq-dns" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.973556 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerName="dnsmasq-dns" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.977437 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="239c12d0-5821-4bcc-9b6e-b90a896731cd" containerName="dnsmasq-dns" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.978628 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.987273 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.987625 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bxlbj" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.987833 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.997704 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 06:43:12 crc kubenswrapper[4784]: I0123 06:43:12.999896 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a24fa8-a5c2-4812-97c2-685330a66205-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.000037 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6a24fa8-a5c2-4812-97c2-685330a66205-openstack-config\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.000079 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b88c57956-78khw" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.000084 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwp5r\" (UniqueName: \"kubernetes.io/projected/b6a24fa8-a5c2-4812-97c2-685330a66205-kube-api-access-dwp5r\") pod 
\"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.000265 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6a24fa8-a5c2-4812-97c2-685330a66205-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.029198 4784 generic.go:334] "Generic (PLEG): container finished" podID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerID="8d3c8e08377ab88967aa77a662956ffbc247c4368de0429b06c380998ac44aec" exitCode=0 Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.029281 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86f8ec13-a652-4f8d-83c9-c278bfbea888","Type":"ContainerDied","Data":"8d3c8e08377ab88967aa77a662956ffbc247c4368de0429b06c380998ac44aec"} Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.032202 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.050288 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-fpd8v" event={"ID":"239c12d0-5821-4bcc-9b6e-b90a896731cd","Type":"ContainerDied","Data":"9e01e1e01cf491676ef6855c4589a650f5fddb67f5aaa8ce85a70d83a9af6c22"} Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.050399 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.050435 4784 scope.go:117] "RemoveContainer" containerID="44aef79274973ca65bd96ff6ae614bde5568a481ed5cb84ef671f163d3f8de58" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.107266 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a24fa8-a5c2-4812-97c2-685330a66205-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.108430 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6a24fa8-a5c2-4812-97c2-685330a66205-openstack-config\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.109823 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwp5r\" (UniqueName: \"kubernetes.io/projected/b6a24fa8-a5c2-4812-97c2-685330a66205-kube-api-access-dwp5r\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.113531 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6a24fa8-a5c2-4812-97c2-685330a66205-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.119586 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b6a24fa8-a5c2-4812-97c2-685330a66205-openstack-config\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.123676 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6a24fa8-a5c2-4812-97c2-685330a66205-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.138349 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwp5r\" (UniqueName: \"kubernetes.io/projected/b6a24fa8-a5c2-4812-97c2-685330a66205-kube-api-access-dwp5r\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.138442 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55f9fccfc8-b52jv"] Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.138848 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55f9fccfc8-b52jv" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api-log" containerID="cri-o://4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b" gracePeriod=30 Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.139521 4784 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-55f9fccfc8-b52jv" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api" containerID="cri-o://c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f" gracePeriod=30 Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.145143 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.145111289 podStartE2EDuration="10.145111289s" podCreationTimestamp="2026-01-23 06:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:13.083276756 +0000 UTC m=+1396.315784730" watchObservedRunningTime="2026-01-23 06:43:13.145111289 +0000 UTC m=+1396.377619263" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.149595 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b6a24fa8-a5c2-4812-97c2-685330a66205-openstack-config-secret\") pod \"openstackclient\" (UID: \"b6a24fa8-a5c2-4812-97c2-685330a66205\") " pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.252205 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fpd8v"] Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.284180 4784 scope.go:117] "RemoveContainer" containerID="94a27e3c53af418d70ce1201dea2bf867300c066d56535e95d593c77b04e5d46" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.293494 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-fpd8v"] Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.327436 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 06:43:13 crc kubenswrapper[4784]: I0123 06:43:13.926265 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 06:43:13 crc kubenswrapper[4784]: W0123 06:43:13.929367 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6a24fa8_a5c2_4812_97c2_685330a66205.slice/crio-6b0c61e51992d74e41a1953de043cc4495449ee096e86b5c06abba94d79dd35d WatchSource:0}: Error finding container 6b0c61e51992d74e41a1953de043cc4495449ee096e86b5c06abba94d79dd35d: Status 404 returned error can't find the container with id 6b0c61e51992d74e41a1953de043cc4495449ee096e86b5c06abba94d79dd35d Jan 23 06:43:14 crc kubenswrapper[4784]: I0123 06:43:14.107834 4784 generic.go:334] "Generic (PLEG): container finished" podID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerID="4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b" exitCode=143 Jan 23 06:43:14 crc kubenswrapper[4784]: I0123 06:43:14.107934 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9fccfc8-b52jv" event={"ID":"a51a41e9-6984-493a-b3af-ecee435cc80f","Type":"ContainerDied","Data":"4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b"} Jan 23 06:43:14 crc kubenswrapper[4784]: I0123 06:43:14.123205 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b6a24fa8-a5c2-4812-97c2-685330a66205","Type":"ContainerStarted","Data":"6b0c61e51992d74e41a1953de043cc4495449ee096e86b5c06abba94d79dd35d"} Jan 23 06:43:15 crc kubenswrapper[4784]: I0123 06:43:15.140882 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerStarted","Data":"e3ea1a66eb30b1e541d10495f26f6ec1593c09ae1d1ba1b4c7b850ff976dac6d"} Jan 23 06:43:15 crc kubenswrapper[4784]: I0123 06:43:15.267618 4784 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239c12d0-5821-4bcc-9b6e-b90a896731cd" path="/var/lib/kubelet/pods/239c12d0-5821-4bcc-9b6e-b90a896731cd/volumes" Jan 23 06:43:16 crc kubenswrapper[4784]: I0123 06:43:16.178393 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerStarted","Data":"adb8b4ed31b359830c60bfc3f9eb0d3170ff74e4d65c52ecae9e9764c91ab8f4"} Jan 23 06:43:16 crc kubenswrapper[4784]: I0123 06:43:16.186145 4784 generic.go:334] "Generic (PLEG): container finished" podID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerID="fe28b99b7850613664106e1c6ac225f747959ef5a0b0c51ebd3bef9d7e7d13b8" exitCode=0 Jan 23 06:43:16 crc kubenswrapper[4784]: I0123 06:43:16.186204 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86f8ec13-a652-4f8d-83c9-c278bfbea888","Type":"ContainerDied","Data":"fe28b99b7850613664106e1c6ac225f747959ef5a0b0c51ebd3bef9d7e7d13b8"} Jan 23 06:43:16 crc kubenswrapper[4784]: I0123 06:43:16.872398 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.020952 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-combined-ca-bundle\") pod \"86f8ec13-a652-4f8d-83c9-c278bfbea888\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.022072 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-scripts\") pod \"86f8ec13-a652-4f8d-83c9-c278bfbea888\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.022317 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data\") pod \"86f8ec13-a652-4f8d-83c9-c278bfbea888\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.022412 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data-custom\") pod \"86f8ec13-a652-4f8d-83c9-c278bfbea888\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.023373 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kkqm\" (UniqueName: \"kubernetes.io/projected/86f8ec13-a652-4f8d-83c9-c278bfbea888-kube-api-access-8kkqm\") pod \"86f8ec13-a652-4f8d-83c9-c278bfbea888\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.023574 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/86f8ec13-a652-4f8d-83c9-c278bfbea888-etc-machine-id\") pod \"86f8ec13-a652-4f8d-83c9-c278bfbea888\" (UID: \"86f8ec13-a652-4f8d-83c9-c278bfbea888\") " Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.023785 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f8ec13-a652-4f8d-83c9-c278bfbea888-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "86f8ec13-a652-4f8d-83c9-c278bfbea888" (UID: "86f8ec13-a652-4f8d-83c9-c278bfbea888"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.024484 4784 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f8ec13-a652-4f8d-83c9-c278bfbea888-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.031199 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-scripts" (OuterVolumeSpecName: "scripts") pod "86f8ec13-a652-4f8d-83c9-c278bfbea888" (UID: "86f8ec13-a652-4f8d-83c9-c278bfbea888"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.032450 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86f8ec13-a652-4f8d-83c9-c278bfbea888" (UID: "86f8ec13-a652-4f8d-83c9-c278bfbea888"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.032593 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f8ec13-a652-4f8d-83c9-c278bfbea888-kube-api-access-8kkqm" (OuterVolumeSpecName: "kube-api-access-8kkqm") pod "86f8ec13-a652-4f8d-83c9-c278bfbea888" (UID: "86f8ec13-a652-4f8d-83c9-c278bfbea888"). InnerVolumeSpecName "kube-api-access-8kkqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.117130 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86f8ec13-a652-4f8d-83c9-c278bfbea888" (UID: "86f8ec13-a652-4f8d-83c9-c278bfbea888"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.127267 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kkqm\" (UniqueName: \"kubernetes.io/projected/86f8ec13-a652-4f8d-83c9-c278bfbea888-kube-api-access-8kkqm\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.127834 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.127927 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.128021 4784 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.200126 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data" (OuterVolumeSpecName: "config-data") pod "86f8ec13-a652-4f8d-83c9-c278bfbea888" (UID: "86f8ec13-a652-4f8d-83c9-c278bfbea888"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.205608 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86f8ec13-a652-4f8d-83c9-c278bfbea888","Type":"ContainerDied","Data":"52851fb4a6f2253298ec351f6ac83134cf0e2f99d17a2010c26d580999ebd12e"} Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.205690 4784 scope.go:117] "RemoveContainer" containerID="8d3c8e08377ab88967aa77a662956ffbc247c4368de0429b06c380998ac44aec" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.205915 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.232657 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f8ec13-a652-4f8d-83c9-c278bfbea888-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.311669 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.348719 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.348793 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:43:17 crc kubenswrapper[4784]: E0123 06:43:17.349479 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerName="probe" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.349515 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerName="probe" Jan 23 06:43:17 crc kubenswrapper[4784]: E0123 06:43:17.349565 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerName="cinder-scheduler" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.349576 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerName="cinder-scheduler" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.350098 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerName="cinder-scheduler" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.350126 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" containerName="probe" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 
06:43:17.357288 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.358032 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.375120 4784 scope.go:117] "RemoveContainer" containerID="fe28b99b7850613664106e1c6ac225f747959ef5a0b0c51ebd3bef9d7e7d13b8" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.375173 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.544614 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.544668 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-config-data\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.544784 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.544816 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-scripts\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.544939 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ac961b-d41b-43ef-b55e-07b0cf093e56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.544973 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv6ln\" (UniqueName: \"kubernetes.io/projected/87ac961b-d41b-43ef-b55e-07b0cf093e56-kube-api-access-gv6ln\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.584244 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:60782->10.217.0.159:8443: read: connection reset by peer" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.584961 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.647520 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/87ac961b-d41b-43ef-b55e-07b0cf093e56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.647594 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv6ln\" (UniqueName: \"kubernetes.io/projected/87ac961b-d41b-43ef-b55e-07b0cf093e56-kube-api-access-gv6ln\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.647631 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87ac961b-d41b-43ef-b55e-07b0cf093e56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.647637 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-config-data\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.647699 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.647803 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.647836 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-scripts\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.654127 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-config-data\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.661642 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.662442 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.663236 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87ac961b-d41b-43ef-b55e-07b0cf093e56-scripts\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.675498 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gv6ln\" (UniqueName: \"kubernetes.io/projected/87ac961b-d41b-43ef-b55e-07b0cf093e56-kube-api-access-gv6ln\") pod \"cinder-scheduler-0\" (UID: \"87ac961b-d41b-43ef-b55e-07b0cf093e56\") " pod="openstack/cinder-scheduler-0" Jan 23 06:43:17 crc kubenswrapper[4784]: I0123 06:43:17.753222 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.035436 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.165955 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data-custom\") pod \"a51a41e9-6984-493a-b3af-ecee435cc80f\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.166493 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s8vt\" (UniqueName: \"kubernetes.io/projected/a51a41e9-6984-493a-b3af-ecee435cc80f-kube-api-access-4s8vt\") pod \"a51a41e9-6984-493a-b3af-ecee435cc80f\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.166593 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data\") pod \"a51a41e9-6984-493a-b3af-ecee435cc80f\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.166706 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a51a41e9-6984-493a-b3af-ecee435cc80f-logs\") pod \"a51a41e9-6984-493a-b3af-ecee435cc80f\" (UID: 
\"a51a41e9-6984-493a-b3af-ecee435cc80f\") " Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.166813 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-combined-ca-bundle\") pod \"a51a41e9-6984-493a-b3af-ecee435cc80f\" (UID: \"a51a41e9-6984-493a-b3af-ecee435cc80f\") " Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.168810 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a51a41e9-6984-493a-b3af-ecee435cc80f-logs" (OuterVolumeSpecName: "logs") pod "a51a41e9-6984-493a-b3af-ecee435cc80f" (UID: "a51a41e9-6984-493a-b3af-ecee435cc80f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.191299 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a51a41e9-6984-493a-b3af-ecee435cc80f-kube-api-access-4s8vt" (OuterVolumeSpecName: "kube-api-access-4s8vt") pod "a51a41e9-6984-493a-b3af-ecee435cc80f" (UID: "a51a41e9-6984-493a-b3af-ecee435cc80f"). InnerVolumeSpecName "kube-api-access-4s8vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.207889 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a51a41e9-6984-493a-b3af-ecee435cc80f" (UID: "a51a41e9-6984-493a-b3af-ecee435cc80f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.227446 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a51a41e9-6984-493a-b3af-ecee435cc80f" (UID: "a51a41e9-6984-493a-b3af-ecee435cc80f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.244727 4784 generic.go:334] "Generic (PLEG): container finished" podID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerID="c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f" exitCode=0 Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.244864 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9fccfc8-b52jv" event={"ID":"a51a41e9-6984-493a-b3af-ecee435cc80f","Type":"ContainerDied","Data":"c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f"} Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.244916 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55f9fccfc8-b52jv" event={"ID":"a51a41e9-6984-493a-b3af-ecee435cc80f","Type":"ContainerDied","Data":"1b8c949707d53d20813248453c82f272ce5f46af4fdd2b4f8aa5a4eeecc61aa8"} Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.244945 4784 scope.go:117] "RemoveContainer" containerID="c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.245158 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55f9fccfc8-b52jv" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.255973 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data" (OuterVolumeSpecName: "config-data") pod "a51a41e9-6984-493a-b3af-ecee435cc80f" (UID: "a51a41e9-6984-493a-b3af-ecee435cc80f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.261124 4784 generic.go:334] "Generic (PLEG): container finished" podID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerID="8ceae607bf3d1a305e21df79b6d78c685530e9c5947012ef6b094625790484a4" exitCode=0 Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.261273 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79d47d6854-hfx9p" event={"ID":"8d31d380-7e87-4ce6-bbfe-5f3788456978","Type":"ContainerDied","Data":"8ceae607bf3d1a305e21df79b6d78c685530e9c5947012ef6b094625790484a4"} Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.266850 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.272578 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.272627 4784 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.272641 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s8vt\" (UniqueName: 
\"kubernetes.io/projected/a51a41e9-6984-493a-b3af-ecee435cc80f-kube-api-access-4s8vt\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.272658 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a51a41e9-6984-493a-b3af-ecee435cc80f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.272672 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a51a41e9-6984-493a-b3af-ecee435cc80f-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.307705 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.83703342 podStartE2EDuration="18.307662347s" podCreationTimestamp="2026-01-23 06:43:00 +0000 UTC" firstStartedPulling="2026-01-23 06:43:02.029373657 +0000 UTC m=+1385.261881631" lastFinishedPulling="2026-01-23 06:43:17.500002584 +0000 UTC m=+1400.732510558" observedRunningTime="2026-01-23 06:43:18.296712227 +0000 UTC m=+1401.529220211" watchObservedRunningTime="2026-01-23 06:43:18.307662347 +0000 UTC m=+1401.540170331" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.322650 4784 scope.go:117] "RemoveContainer" containerID="4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.371771 4784 scope.go:117] "RemoveContainer" containerID="c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f" Jan 23 06:43:18 crc kubenswrapper[4784]: E0123 06:43:18.381429 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f\": container with ID starting with c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f not found: ID does not exist" 
containerID="c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.381513 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f"} err="failed to get container status \"c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f\": rpc error: code = NotFound desc = could not find container \"c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f\": container with ID starting with c18706aa3c59bb38a1cf7fe810c8f06528e4adf9db2d859d3fa2ca4cfd2be25f not found: ID does not exist" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.381563 4784 scope.go:117] "RemoveContainer" containerID="4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b" Jan 23 06:43:18 crc kubenswrapper[4784]: E0123 06:43:18.381979 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b\": container with ID starting with 4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b not found: ID does not exist" containerID="4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.382035 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b"} err="failed to get container status \"4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b\": rpc error: code = NotFound desc = could not find container \"4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b\": container with ID starting with 4f3ee2742778e4210542d63f23262b55548233bfeb377a27e1be31fc301fb85b not found: ID does not exist" Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.454014 4784 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.613214 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55f9fccfc8-b52jv"] Jan 23 06:43:18 crc kubenswrapper[4784]: I0123 06:43:18.624378 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55f9fccfc8-b52jv"] Jan 23 06:43:19 crc kubenswrapper[4784]: I0123 06:43:19.280714 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f8ec13-a652-4f8d-83c9-c278bfbea888" path="/var/lib/kubelet/pods/86f8ec13-a652-4f8d-83c9-c278bfbea888/volumes" Jan 23 06:43:19 crc kubenswrapper[4784]: I0123 06:43:19.281913 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" path="/var/lib/kubelet/pods/a51a41e9-6984-493a-b3af-ecee435cc80f/volumes" Jan 23 06:43:19 crc kubenswrapper[4784]: I0123 06:43:19.307900 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerStarted","Data":"0e1145dd77d2abc737a2321a017bcb15c4fb74c55c7e95af7845f29de3e39ed1"} Jan 23 06:43:19 crc kubenswrapper[4784]: I0123 06:43:19.311636 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"87ac961b-d41b-43ef-b55e-07b0cf093e56","Type":"ContainerStarted","Data":"4c26a74f3181761409642fe3666b7081a0f8c24e14e09758a50e955302a68033"} Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.334521 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"87ac961b-d41b-43ef-b55e-07b0cf093e56","Type":"ContainerStarted","Data":"dc1648fe152d72b6b402d1cde063c6ff10a8ad784e1dc8b191d3097c38f8fb57"} Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.436144 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-856bb5496c-5hkpt"] Jan 23 06:43:20 crc 
kubenswrapper[4784]: E0123 06:43:20.436684 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.436698 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api" Jan 23 06:43:20 crc kubenswrapper[4784]: E0123 06:43:20.436720 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api-log" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.436727 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api-log" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.436934 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.436964 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api-log" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.438106 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.441892 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.447215 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.447250 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.482283 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-internal-tls-certs\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.482360 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfac942c-ab7e-42a0-8091-29079fd4da0e-run-httpd\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.482387 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfac942c-ab7e-42a0-8091-29079fd4da0e-log-httpd\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.482413 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-combined-ca-bundle\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.482453 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5rzr\" (UniqueName: \"kubernetes.io/projected/bfac942c-ab7e-42a0-8091-29079fd4da0e-kube-api-access-n5rzr\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.482474 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfac942c-ab7e-42a0-8091-29079fd4da0e-etc-swift\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.482496 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-public-tls-certs\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.482515 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-config-data\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.499841 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/swift-proxy-856bb5496c-5hkpt"] Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.584739 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-internal-tls-certs\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.584916 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfac942c-ab7e-42a0-8091-29079fd4da0e-run-httpd\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.584947 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfac942c-ab7e-42a0-8091-29079fd4da0e-log-httpd\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.584987 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-combined-ca-bundle\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.585066 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5rzr\" (UniqueName: \"kubernetes.io/projected/bfac942c-ab7e-42a0-8091-29079fd4da0e-kube-api-access-n5rzr\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" 
Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.585096 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfac942c-ab7e-42a0-8091-29079fd4da0e-etc-swift\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.585289 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-public-tls-certs\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.585411 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-config-data\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.585675 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfac942c-ab7e-42a0-8091-29079fd4da0e-run-httpd\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.585730 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfac942c-ab7e-42a0-8091-29079fd4da0e-log-httpd\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.594443 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bfac942c-ab7e-42a0-8091-29079fd4da0e-etc-swift\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.595078 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-internal-tls-certs\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.602023 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-config-data\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.603152 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-combined-ca-bundle\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.608332 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5rzr\" (UniqueName: \"kubernetes.io/projected/bfac942c-ab7e-42a0-8091-29079fd4da0e-kube-api-access-n5rzr\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.618826 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfac942c-ab7e-42a0-8091-29079fd4da0e-public-tls-certs\") pod \"swift-proxy-856bb5496c-5hkpt\" (UID: \"bfac942c-ab7e-42a0-8091-29079fd4da0e\") " pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:20 crc kubenswrapper[4784]: I0123 06:43:20.761112 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:21 crc kubenswrapper[4784]: I0123 06:43:21.359860 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"87ac961b-d41b-43ef-b55e-07b0cf093e56","Type":"ContainerStarted","Data":"5d124ca3d623fc34c292cdcaa5650726186f175d218e0de036668ff4aeb6975d"} Jan 23 06:43:21 crc kubenswrapper[4784]: I0123 06:43:21.397335 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.397301736 podStartE2EDuration="4.397301736s" podCreationTimestamp="2026-01-23 06:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:21.388858388 +0000 UTC m=+1404.621366362" watchObservedRunningTime="2026-01-23 06:43:21.397301736 +0000 UTC m=+1404.629809710" Jan 23 06:43:21 crc kubenswrapper[4784]: I0123 06:43:21.534194 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-856bb5496c-5hkpt"] Jan 23 06:43:21 crc kubenswrapper[4784]: W0123 06:43:21.564823 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfac942c_ab7e_42a0_8091_29079fd4da0e.slice/crio-efeb96845f2e7aba801ef7e93195510e72f6b1f4ffad48a8cc8508814d876d42 WatchSource:0}: Error finding container efeb96845f2e7aba801ef7e93195510e72f6b1f4ffad48a8cc8508814d876d42: Status 404 returned error can't find the container with id efeb96845f2e7aba801ef7e93195510e72f6b1f4ffad48a8cc8508814d876d42 Jan 23 06:43:22 crc 
kubenswrapper[4784]: I0123 06:43:22.375764 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-856bb5496c-5hkpt" event={"ID":"bfac942c-ab7e-42a0-8091-29079fd4da0e","Type":"ContainerStarted","Data":"efeb96845f2e7aba801ef7e93195510e72f6b1f4ffad48a8cc8508814d876d42"} Jan 23 06:43:22 crc kubenswrapper[4784]: I0123 06:43:22.755069 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 06:43:22 crc kubenswrapper[4784]: I0123 06:43:22.817955 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55f9fccfc8-b52jv" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:43:22 crc kubenswrapper[4784]: I0123 06:43:22.817981 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55f9fccfc8-b52jv" podUID="a51a41e9-6984-493a-b3af-ecee435cc80f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.176:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:43:22 crc kubenswrapper[4784]: I0123 06:43:22.997591 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.010836 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="ceilometer-central-agent" containerID="cri-o://8bdde3e5eb4cafd08b90cdddb93b764ec123bd977a5c72329f8c94528ca49610" gracePeriod=30 Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.011512 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" 
containerName="proxy-httpd" containerID="cri-o://0e1145dd77d2abc737a2321a017bcb15c4fb74c55c7e95af7845f29de3e39ed1" gracePeriod=30 Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.011584 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="sg-core" containerID="cri-o://adb8b4ed31b359830c60bfc3f9eb0d3170ff74e4d65c52ecae9e9764c91ab8f4" gracePeriod=30 Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.011630 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="ceilometer-notification-agent" containerID="cri-o://e3ea1a66eb30b1e541d10495f26f6ec1593c09ae1d1ba1b4c7b850ff976dac6d" gracePeriod=30 Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.283486 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 06:43:23 crc kubenswrapper[4784]: E0123 06:43:23.375723 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cd97396_21eb_4ada_b5d8_c5f6a7abf46c.slice/crio-adb8b4ed31b359830c60bfc3f9eb0d3170ff74e4d65c52ecae9e9764c91ab8f4.scope\": RecentStats: unable to find data in memory cache]" Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.451899 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-856bb5496c-5hkpt" event={"ID":"bfac942c-ab7e-42a0-8091-29079fd4da0e","Type":"ContainerStarted","Data":"277520bbd0a27584d95daade313a362c3a9dd6f8a18fdb8f4e9d900e2bf52efb"} Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.456197 4784 generic.go:334] "Generic (PLEG): container finished" podID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerID="adb8b4ed31b359830c60bfc3f9eb0d3170ff74e4d65c52ecae9e9764c91ab8f4" exitCode=2 Jan 23 06:43:23 
crc kubenswrapper[4784]: I0123 06:43:23.456557 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerDied","Data":"adb8b4ed31b359830c60bfc3f9eb0d3170ff74e4d65c52ecae9e9764c91ab8f4"} Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.603710 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.604295 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.604376 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.606835 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99f5c7da473bb191e287690718f667aa1ba0bc87b545db802bd06bfff3e98701"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:43:23 crc kubenswrapper[4784]: I0123 06:43:23.606986 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" 
containerID="cri-o://99f5c7da473bb191e287690718f667aa1ba0bc87b545db802bd06bfff3e98701" gracePeriod=600 Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.477085 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="99f5c7da473bb191e287690718f667aa1ba0bc87b545db802bd06bfff3e98701" exitCode=0 Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.477194 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"99f5c7da473bb191e287690718f667aa1ba0bc87b545db802bd06bfff3e98701"} Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.477247 4784 scope.go:117] "RemoveContainer" containerID="7d73b98a0e27924b52323e09dc829b98e1ffba0a17575fb7657392d46f6773c1" Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.482098 4784 generic.go:334] "Generic (PLEG): container finished" podID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerID="0e1145dd77d2abc737a2321a017bcb15c4fb74c55c7e95af7845f29de3e39ed1" exitCode=0 Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.482142 4784 generic.go:334] "Generic (PLEG): container finished" podID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerID="e3ea1a66eb30b1e541d10495f26f6ec1593c09ae1d1ba1b4c7b850ff976dac6d" exitCode=0 Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.482151 4784 generic.go:334] "Generic (PLEG): container finished" podID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerID="8bdde3e5eb4cafd08b90cdddb93b764ec123bd977a5c72329f8c94528ca49610" exitCode=0 Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.482177 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerDied","Data":"0e1145dd77d2abc737a2321a017bcb15c4fb74c55c7e95af7845f29de3e39ed1"} Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.482229 
4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerDied","Data":"e3ea1a66eb30b1e541d10495f26f6ec1593c09ae1d1ba1b4c7b850ff976dac6d"} Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.482406 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerDied","Data":"8bdde3e5eb4cafd08b90cdddb93b764ec123bd977a5c72329f8c94528ca49610"} Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.485173 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-856bb5496c-5hkpt" event={"ID":"bfac942c-ab7e-42a0-8091-29079fd4da0e","Type":"ContainerStarted","Data":"028f618fd4042054abf7e6e42840482f3fb422ba1e80d785e2430ca0cf4d11be"} Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.486818 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.486899 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:24 crc kubenswrapper[4784]: I0123 06:43:24.523377 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-856bb5496c-5hkpt" podStartSLOduration=4.523346951 podStartE2EDuration="4.523346951s" podCreationTimestamp="2026-01-23 06:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:24.509109781 +0000 UTC m=+1407.741617775" watchObservedRunningTime="2026-01-23 06:43:24.523346951 +0000 UTC m=+1407.755854925" Jan 23 06:43:25 crc kubenswrapper[4784]: I0123 06:43:25.713081 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c68c7795c-7p5x6" Jan 23 06:43:25 crc kubenswrapper[4784]: I0123 
06:43:25.793825 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c4745df56-9q499"] Jan 23 06:43:25 crc kubenswrapper[4784]: I0123 06:43:25.794447 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c4745df56-9q499" podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerName="neutron-api" containerID="cri-o://2d0d1b7154e7737507815fbb0e58728c0238fd1976942a0f94a2fa64801d429b" gracePeriod=30 Jan 23 06:43:25 crc kubenswrapper[4784]: I0123 06:43:25.794923 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c4745df56-9q499" podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerName="neutron-httpd" containerID="cri-o://fa0ccc3355232bbb89fd52681782ad95c16df66ce4cb713b92a2303a88844c67" gracePeriod=30 Jan 23 06:43:26 crc kubenswrapper[4784]: I0123 06:43:26.443287 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 23 06:43:26 crc kubenswrapper[4784]: I0123 06:43:26.514843 4784 generic.go:334] "Generic (PLEG): container finished" podID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerID="fa0ccc3355232bbb89fd52681782ad95c16df66ce4cb713b92a2303a88844c67" exitCode=0 Jan 23 06:43:26 crc kubenswrapper[4784]: I0123 06:43:26.514919 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4745df56-9q499" event={"ID":"0704d33b-825f-40fb-8c88-5fbb26b6994e","Type":"ContainerDied","Data":"fa0ccc3355232bbb89fd52681782ad95c16df66ce4cb713b92a2303a88844c67"} Jan 23 06:43:28 crc kubenswrapper[4784]: I0123 06:43:28.145614 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 06:43:30 crc kubenswrapper[4784]: I0123 
06:43:30.773422 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:30 crc kubenswrapper[4784]: I0123 06:43:30.774641 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-856bb5496c-5hkpt" Jan 23 06:43:31 crc kubenswrapper[4784]: I0123 06:43:31.162340 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.182:3000/\": dial tcp 10.217.0.182:3000: connect: connection refused" Jan 23 06:43:32 crc kubenswrapper[4784]: I0123 06:43:32.633224 4784 generic.go:334] "Generic (PLEG): container finished" podID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerID="2d0d1b7154e7737507815fbb0e58728c0238fd1976942a0f94a2fa64801d429b" exitCode=0 Jan 23 06:43:32 crc kubenswrapper[4784]: I0123 06:43:32.633320 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4745df56-9q499" event={"ID":"0704d33b-825f-40fb-8c88-5fbb26b6994e","Type":"ContainerDied","Data":"2d0d1b7154e7737507815fbb0e58728c0238fd1976942a0f94a2fa64801d429b"} Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.486465 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.652815 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jgbz\" (UniqueName: \"kubernetes.io/projected/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-kube-api-access-9jgbz\") pod \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.652979 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-sg-core-conf-yaml\") pod \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.653067 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-scripts\") pod \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.653103 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-log-httpd\") pod \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.653131 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-run-httpd\") pod \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.653241 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-combined-ca-bundle\") pod \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.653295 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-config-data\") pod \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\" (UID: \"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.655778 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" (UID: "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.657302 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" (UID: "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.662446 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-scripts" (OuterVolumeSpecName: "scripts") pod "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" (UID: "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.664591 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-kube-api-access-9jgbz" (OuterVolumeSpecName: "kube-api-access-9jgbz") pod "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" (UID: "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c"). InnerVolumeSpecName "kube-api-access-9jgbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.665558 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49"} Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.700340 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cd97396-21eb-4ada-b5d8-c5f6a7abf46c","Type":"ContainerDied","Data":"f9e16b99114a8e4546677a2001edcea86773b0cc44d09f8ceb927e314a408dac"} Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.700415 4784 scope.go:117] "RemoveContainer" containerID="0e1145dd77d2abc737a2321a017bcb15c4fb74c55c7e95af7845f29de3e39ed1" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.700558 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.708290 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b6a24fa8-a5c2-4812-97c2-685330a66205","Type":"ContainerStarted","Data":"c1820886ba8bcc56ee78e8f838bd69e3aeb22f7eb84caae1ea71a26649d2e3c2"} Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.756781 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jgbz\" (UniqueName: \"kubernetes.io/projected/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-kube-api-access-9jgbz\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.756825 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.756839 4784 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.756852 4784 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.761262 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.706102207 podStartE2EDuration="21.761242476s" podCreationTimestamp="2026-01-23 06:43:12 +0000 UTC" firstStartedPulling="2026-01-23 06:43:13.990026428 +0000 UTC m=+1397.222534402" lastFinishedPulling="2026-01-23 06:43:33.045166707 +0000 UTC m=+1416.277674671" observedRunningTime="2026-01-23 06:43:33.744144895 +0000 UTC m=+1416.976652899" watchObservedRunningTime="2026-01-23 06:43:33.761242476 +0000 UTC 
m=+1416.993750450" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.774517 4784 scope.go:117] "RemoveContainer" containerID="adb8b4ed31b359830c60bfc3f9eb0d3170ff74e4d65c52ecae9e9764c91ab8f4" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.784703 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.802065 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" (UID: "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.860877 4784 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.882540 4784 scope.go:117] "RemoveContainer" containerID="e3ea1a66eb30b1e541d10495f26f6ec1593c09ae1d1ba1b4c7b850ff976dac6d" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.906836 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-config-data" (OuterVolumeSpecName: "config-data") pod "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" (UID: "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.943854 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" (UID: "6cd97396-21eb-4ada-b5d8-c5f6a7abf46c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.962531 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcgp6\" (UniqueName: \"kubernetes.io/projected/0704d33b-825f-40fb-8c88-5fbb26b6994e-kube-api-access-dcgp6\") pod \"0704d33b-825f-40fb-8c88-5fbb26b6994e\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.962847 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-combined-ca-bundle\") pod \"0704d33b-825f-40fb-8c88-5fbb26b6994e\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.962869 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-ovndb-tls-certs\") pod \"0704d33b-825f-40fb-8c88-5fbb26b6994e\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.962939 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-httpd-config\") pod \"0704d33b-825f-40fb-8c88-5fbb26b6994e\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.962972 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-config\") pod \"0704d33b-825f-40fb-8c88-5fbb26b6994e\" (UID: \"0704d33b-825f-40fb-8c88-5fbb26b6994e\") " Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.963382 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.963404 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.993095 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0704d33b-825f-40fb-8c88-5fbb26b6994e-kube-api-access-dcgp6" (OuterVolumeSpecName: "kube-api-access-dcgp6") pod "0704d33b-825f-40fb-8c88-5fbb26b6994e" (UID: "0704d33b-825f-40fb-8c88-5fbb26b6994e"). InnerVolumeSpecName "kube-api-access-dcgp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:33 crc kubenswrapper[4784]: I0123 06:43:33.998442 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0704d33b-825f-40fb-8c88-5fbb26b6994e" (UID: "0704d33b-825f-40fb-8c88-5fbb26b6994e"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.075766 4784 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.075812 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcgp6\" (UniqueName: \"kubernetes.io/projected/0704d33b-825f-40fb-8c88-5fbb26b6994e-kube-api-access-dcgp6\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.076992 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-config" (OuterVolumeSpecName: "config") pod "0704d33b-825f-40fb-8c88-5fbb26b6994e" (UID: "0704d33b-825f-40fb-8c88-5fbb26b6994e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.102133 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0704d33b-825f-40fb-8c88-5fbb26b6994e" (UID: "0704d33b-825f-40fb-8c88-5fbb26b6994e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.130928 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0704d33b-825f-40fb-8c88-5fbb26b6994e" (UID: "0704d33b-825f-40fb-8c88-5fbb26b6994e"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.178955 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.179567 4784 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.179586 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0704d33b-825f-40fb-8c88-5fbb26b6994e-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.202416 4784 scope.go:117] "RemoveContainer" containerID="8bdde3e5eb4cafd08b90cdddb93b764ec123bd977a5c72329f8c94528ca49610" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.231763 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.251522 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.263921 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:34 crc kubenswrapper[4784]: E0123 06:43:34.264459 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="ceilometer-notification-agent" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264485 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="ceilometer-notification-agent" Jan 23 06:43:34 crc kubenswrapper[4784]: E0123 06:43:34.264501 4784 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerName="neutron-httpd" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264509 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerName="neutron-httpd" Jan 23 06:43:34 crc kubenswrapper[4784]: E0123 06:43:34.264527 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="sg-core" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264534 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="sg-core" Jan 23 06:43:34 crc kubenswrapper[4784]: E0123 06:43:34.264557 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="proxy-httpd" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264563 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="proxy-httpd" Jan 23 06:43:34 crc kubenswrapper[4784]: E0123 06:43:34.264580 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="ceilometer-central-agent" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264587 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="ceilometer-central-agent" Jan 23 06:43:34 crc kubenswrapper[4784]: E0123 06:43:34.264612 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerName="neutron-api" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264618 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerName="neutron-api" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264860 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerName="neutron-httpd" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264874 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" containerName="neutron-api" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.264974 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="ceilometer-notification-agent" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.265000 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="proxy-httpd" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.265012 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="sg-core" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.265027 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" containerName="ceilometer-central-agent" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.286187 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.289967 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.290513 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.314313 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.383468 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q9pp\" (UniqueName: \"kubernetes.io/projected/03014e29-2486-4bde-9c21-8f7b8dac7b3c-kube-api-access-8q9pp\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.383546 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-config-data\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.383572 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.383589 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.383617 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.383661 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.383691 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-scripts\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.486261 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q9pp\" (UniqueName: \"kubernetes.io/projected/03014e29-2486-4bde-9c21-8f7b8dac7b3c-kube-api-access-8q9pp\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.486331 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-config-data\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.486360 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.486380 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.486408 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.486454 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.486488 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-scripts\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.487787 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " 
pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.487818 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.495987 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.496078 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-scripts\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.496561 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.497462 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-config-data\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.505198 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q9pp\" (UniqueName: 
\"kubernetes.io/projected/03014e29-2486-4bde-9c21-8f7b8dac7b3c-kube-api-access-8q9pp\") pod \"ceilometer-0\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.627889 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.738370 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c4745df56-9q499" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.738406 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c4745df56-9q499" event={"ID":"0704d33b-825f-40fb-8c88-5fbb26b6994e","Type":"ContainerDied","Data":"dcf10b67794268be8e8f554ab6892003466d4ad466299d10bcb8d89252b39eba"} Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.739113 4784 scope.go:117] "RemoveContainer" containerID="fa0ccc3355232bbb89fd52681782ad95c16df66ce4cb713b92a2303a88844c67" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.789952 4784 scope.go:117] "RemoveContainer" containerID="2d0d1b7154e7737507815fbb0e58728c0238fd1976942a0f94a2fa64801d429b" Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.821108 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c4745df56-9q499"] Jan 23 06:43:34 crc kubenswrapper[4784]: I0123 06:43:34.838200 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c4745df56-9q499"] Jan 23 06:43:35 crc kubenswrapper[4784]: I0123 06:43:35.265636 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0704d33b-825f-40fb-8c88-5fbb26b6994e" path="/var/lib/kubelet/pods/0704d33b-825f-40fb-8c88-5fbb26b6994e/volumes" Jan 23 06:43:35 crc kubenswrapper[4784]: I0123 06:43:35.266703 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd97396-21eb-4ada-b5d8-c5f6a7abf46c" 
path="/var/lib/kubelet/pods/6cd97396-21eb-4ada-b5d8-c5f6a7abf46c/volumes" Jan 23 06:43:35 crc kubenswrapper[4784]: I0123 06:43:35.267482 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:35 crc kubenswrapper[4784]: W0123 06:43:35.273048 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03014e29_2486_4bde_9c21_8f7b8dac7b3c.slice/crio-ef09e160fae4d65a6d76954f3ae1ea773c36802d7f284b39cfad95f132f6fc37 WatchSource:0}: Error finding container ef09e160fae4d65a6d76954f3ae1ea773c36802d7f284b39cfad95f132f6fc37: Status 404 returned error can't find the container with id ef09e160fae4d65a6d76954f3ae1ea773c36802d7f284b39cfad95f132f6fc37 Jan 23 06:43:35 crc kubenswrapper[4784]: I0123 06:43:35.767349 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerStarted","Data":"ef09e160fae4d65a6d76954f3ae1ea773c36802d7f284b39cfad95f132f6fc37"} Jan 23 06:43:35 crc kubenswrapper[4784]: I0123 06:43:35.977298 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-z6kmb"] Jan 23 06:43:35 crc kubenswrapper[4784]: I0123 06:43:35.979171 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:35 crc kubenswrapper[4784]: I0123 06:43:35.997316 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-z6kmb"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.080787 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-q7h5p"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.082826 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.093996 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-q7h5p"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.120991 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwtt\" (UniqueName: \"kubernetes.io/projected/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-kube-api-access-5pwtt\") pod \"nova-api-db-create-z6kmb\" (UID: \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\") " pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.121066 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-operator-scripts\") pod \"nova-api-db-create-z6kmb\" (UID: \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\") " pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.196680 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b75f-account-create-update-nlbg8"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.198277 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.204790 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.232477 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b75f-account-create-update-nlbg8"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.233349 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkrdl\" (UniqueName: \"kubernetes.io/projected/ea803a89-1983-44be-bf13-ac41e92eec7e-kube-api-access-rkrdl\") pod \"nova-cell0-db-create-q7h5p\" (UID: \"ea803a89-1983-44be-bf13-ac41e92eec7e\") " pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.233585 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwtt\" (UniqueName: \"kubernetes.io/projected/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-kube-api-access-5pwtt\") pod \"nova-api-db-create-z6kmb\" (UID: \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\") " pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.233702 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-operator-scripts\") pod \"nova-api-db-create-z6kmb\" (UID: \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\") " pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.233937 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea803a89-1983-44be-bf13-ac41e92eec7e-operator-scripts\") pod \"nova-cell0-db-create-q7h5p\" (UID: \"ea803a89-1983-44be-bf13-ac41e92eec7e\") " 
pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.235414 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-operator-scripts\") pod \"nova-api-db-create-z6kmb\" (UID: \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\") " pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.278968 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwtt\" (UniqueName: \"kubernetes.io/projected/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-kube-api-access-5pwtt\") pod \"nova-api-db-create-z6kmb\" (UID: \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\") " pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.296623 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.308695 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.309005 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerName="glance-log" containerID="cri-o://0113bed312592c480c7f7b5dc6a9466b8552bc8d763951f25f640709e4cf3757" gracePeriod=30 Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.309626 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerName="glance-httpd" containerID="cri-o://f7234ac9ecffe753b48da6d5bfd59256741840b45b7b8c635986505e2bff1cbf" gracePeriod=30 Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.319818 4784 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-hklgl"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.325542 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.337822 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c060372-b812-4f94-90c1-b87a4a20c12e-operator-scripts\") pod \"nova-api-b75f-account-create-update-nlbg8\" (UID: \"1c060372-b812-4f94-90c1-b87a4a20c12e\") " pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.337939 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5pv\" (UniqueName: \"kubernetes.io/projected/1c060372-b812-4f94-90c1-b87a4a20c12e-kube-api-access-8j5pv\") pod \"nova-api-b75f-account-create-update-nlbg8\" (UID: \"1c060372-b812-4f94-90c1-b87a4a20c12e\") " pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.337996 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea803a89-1983-44be-bf13-ac41e92eec7e-operator-scripts\") pod \"nova-cell0-db-create-q7h5p\" (UID: \"ea803a89-1983-44be-bf13-ac41e92eec7e\") " pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.338042 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkrdl\" (UniqueName: \"kubernetes.io/projected/ea803a89-1983-44be-bf13-ac41e92eec7e-kube-api-access-rkrdl\") pod \"nova-cell0-db-create-q7h5p\" (UID: \"ea803a89-1983-44be-bf13-ac41e92eec7e\") " pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.352322 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea803a89-1983-44be-bf13-ac41e92eec7e-operator-scripts\") pod \"nova-cell0-db-create-q7h5p\" (UID: \"ea803a89-1983-44be-bf13-ac41e92eec7e\") " pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.371310 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hklgl"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.443412 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79d47d6854-hfx9p" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.443455 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c060372-b812-4f94-90c1-b87a4a20c12e-operator-scripts\") pod \"nova-api-b75f-account-create-update-nlbg8\" (UID: \"1c060372-b812-4f94-90c1-b87a4a20c12e\") " pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.443520 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384a5279-9005-4fd7-882e-e14349adfe06-operator-scripts\") pod \"nova-cell1-db-create-hklgl\" (UID: \"384a5279-9005-4fd7-882e-e14349adfe06\") " pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.443589 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z584k\" (UniqueName: \"kubernetes.io/projected/384a5279-9005-4fd7-882e-e14349adfe06-kube-api-access-z584k\") pod \"nova-cell1-db-create-hklgl\" (UID: 
\"384a5279-9005-4fd7-882e-e14349adfe06\") " pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.443660 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j5pv\" (UniqueName: \"kubernetes.io/projected/1c060372-b812-4f94-90c1-b87a4a20c12e-kube-api-access-8j5pv\") pod \"nova-api-b75f-account-create-update-nlbg8\" (UID: \"1c060372-b812-4f94-90c1-b87a4a20c12e\") " pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.445398 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c060372-b812-4f94-90c1-b87a4a20c12e-operator-scripts\") pod \"nova-api-b75f-account-create-update-nlbg8\" (UID: \"1c060372-b812-4f94-90c1-b87a4a20c12e\") " pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.469454 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkrdl\" (UniqueName: \"kubernetes.io/projected/ea803a89-1983-44be-bf13-ac41e92eec7e-kube-api-access-rkrdl\") pod \"nova-cell0-db-create-q7h5p\" (UID: \"ea803a89-1983-44be-bf13-ac41e92eec7e\") " pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.504915 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1458-account-create-update-v72tc"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.507544 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.510540 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j5pv\" (UniqueName: \"kubernetes.io/projected/1c060372-b812-4f94-90c1-b87a4a20c12e-kube-api-access-8j5pv\") pod \"nova-api-b75f-account-create-update-nlbg8\" (UID: \"1c060372-b812-4f94-90c1-b87a4a20c12e\") " pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.524523 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.535429 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.541034 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1458-account-create-update-v72tc"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.546817 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384a5279-9005-4fd7-882e-e14349adfe06-operator-scripts\") pod \"nova-cell1-db-create-hklgl\" (UID: \"384a5279-9005-4fd7-882e-e14349adfe06\") " pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.546993 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z584k\" (UniqueName: \"kubernetes.io/projected/384a5279-9005-4fd7-882e-e14349adfe06-kube-api-access-z584k\") pod \"nova-cell1-db-create-hklgl\" (UID: \"384a5279-9005-4fd7-882e-e14349adfe06\") " pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.548138 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/384a5279-9005-4fd7-882e-e14349adfe06-operator-scripts\") pod \"nova-cell1-db-create-hklgl\" (UID: \"384a5279-9005-4fd7-882e-e14349adfe06\") " pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.634536 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z584k\" (UniqueName: \"kubernetes.io/projected/384a5279-9005-4fd7-882e-e14349adfe06-kube-api-access-z584k\") pod \"nova-cell1-db-create-hklgl\" (UID: \"384a5279-9005-4fd7-882e-e14349adfe06\") " pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.649620 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35725aa2-6c23-4676-a612-b169efb88e5b-operator-scripts\") pod \"nova-cell0-1458-account-create-update-v72tc\" (UID: \"35725aa2-6c23-4676-a612-b169efb88e5b\") " pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.649910 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx99j\" (UniqueName: \"kubernetes.io/projected/35725aa2-6c23-4676-a612-b169efb88e5b-kube-api-access-vx99j\") pod \"nova-cell0-1458-account-create-update-v72tc\" (UID: \"35725aa2-6c23-4676-a612-b169efb88e5b\") " pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.712496 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.757359 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35725aa2-6c23-4676-a612-b169efb88e5b-operator-scripts\") pod \"nova-cell0-1458-account-create-update-v72tc\" (UID: \"35725aa2-6c23-4676-a612-b169efb88e5b\") " pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.757540 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx99j\" (UniqueName: \"kubernetes.io/projected/35725aa2-6c23-4676-a612-b169efb88e5b-kube-api-access-vx99j\") pod \"nova-cell0-1458-account-create-update-v72tc\" (UID: \"35725aa2-6c23-4676-a612-b169efb88e5b\") " pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.758608 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35725aa2-6c23-4676-a612-b169efb88e5b-operator-scripts\") pod \"nova-cell0-1458-account-create-update-v72tc\" (UID: \"35725aa2-6c23-4676-a612-b169efb88e5b\") " pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.761990 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-43e3-account-create-update-9c2q2"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.763643 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.770736 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.775213 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.837503 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx99j\" (UniqueName: \"kubernetes.io/projected/35725aa2-6c23-4676-a612-b169efb88e5b-kube-api-access-vx99j\") pod \"nova-cell0-1458-account-create-update-v72tc\" (UID: \"35725aa2-6c23-4676-a612-b169efb88e5b\") " pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.853521 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.860979 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxg58\" (UniqueName: \"kubernetes.io/projected/a6c136c6-ca42-4080-ac37-582e3e86847f-kube-api-access-mxg58\") pod \"nova-cell1-43e3-account-create-update-9c2q2\" (UID: \"a6c136c6-ca42-4080-ac37-582e3e86847f\") " pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.861064 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6c136c6-ca42-4080-ac37-582e3e86847f-operator-scripts\") pod \"nova-cell1-43e3-account-create-update-9c2q2\" (UID: \"a6c136c6-ca42-4080-ac37-582e3e86847f\") " pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.877815 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-43e3-account-create-update-9c2q2"] Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.920703 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerStarted","Data":"def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44"} Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.963944 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxg58\" (UniqueName: \"kubernetes.io/projected/a6c136c6-ca42-4080-ac37-582e3e86847f-kube-api-access-mxg58\") pod \"nova-cell1-43e3-account-create-update-9c2q2\" (UID: \"a6c136c6-ca42-4080-ac37-582e3e86847f\") " pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.964023 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6c136c6-ca42-4080-ac37-582e3e86847f-operator-scripts\") pod \"nova-cell1-43e3-account-create-update-9c2q2\" (UID: \"a6c136c6-ca42-4080-ac37-582e3e86847f\") " pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:36 crc kubenswrapper[4784]: I0123 06:43:36.979407 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6c136c6-ca42-4080-ac37-582e3e86847f-operator-scripts\") pod \"nova-cell1-43e3-account-create-update-9c2q2\" (UID: \"a6c136c6-ca42-4080-ac37-582e3e86847f\") " pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:37 crc kubenswrapper[4784]: I0123 06:43:37.021967 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxg58\" (UniqueName: \"kubernetes.io/projected/a6c136c6-ca42-4080-ac37-582e3e86847f-kube-api-access-mxg58\") pod \"nova-cell1-43e3-account-create-update-9c2q2\" (UID: \"a6c136c6-ca42-4080-ac37-582e3e86847f\") " pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:37 crc kubenswrapper[4784]: I0123 06:43:37.130667 4784 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:37 crc kubenswrapper[4784]: I0123 06:43:37.238657 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-z6kmb"] Jan 23 06:43:37 crc kubenswrapper[4784]: W0123 06:43:37.428576 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8576e5ec_00dc_45b9_93b2_b76f32e3e92d.slice/crio-3be0beac7ce8f906a235b93eb71165a56771396816917ecf053415e4f3cb48f8 WatchSource:0}: Error finding container 3be0beac7ce8f906a235b93eb71165a56771396816917ecf053415e4f3cb48f8: Status 404 returned error can't find the container with id 3be0beac7ce8f906a235b93eb71165a56771396816917ecf053415e4f3cb48f8 Jan 23 06:43:37 crc kubenswrapper[4784]: I0123 06:43:37.689494 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-q7h5p"] Jan 23 06:43:37 crc kubenswrapper[4784]: W0123 06:43:37.710906 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea803a89_1983_44be_bf13_ac41e92eec7e.slice/crio-7acca836024ed6c4afb1eb91b171a85aff505e69805eb9775ba00ea795e93cea WatchSource:0}: Error finding container 7acca836024ed6c4afb1eb91b171a85aff505e69805eb9775ba00ea795e93cea: Status 404 returned error can't find the container with id 7acca836024ed6c4afb1eb91b171a85aff505e69805eb9775ba00ea795e93cea Jan 23 06:43:37 crc kubenswrapper[4784]: I0123 06:43:37.744004 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b75f-account-create-update-nlbg8"] Jan 23 06:43:37 crc kubenswrapper[4784]: I0123 06:43:37.959872 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1458-account-create-update-v72tc"] Jan 23 06:43:38 crc kubenswrapper[4784]: I0123 06:43:38.009983 4784 generic.go:334] "Generic (PLEG): 
container finished" podID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerID="0113bed312592c480c7f7b5dc6a9466b8552bc8d763951f25f640709e4cf3757" exitCode=143 Jan 23 06:43:38 crc kubenswrapper[4784]: I0123 06:43:38.010306 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3565a005-cf5e-43c0-ab31-59071dc6fb9c","Type":"ContainerDied","Data":"0113bed312592c480c7f7b5dc6a9466b8552bc8d763951f25f640709e4cf3757"} Jan 23 06:43:38 crc kubenswrapper[4784]: I0123 06:43:38.016521 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q7h5p" event={"ID":"ea803a89-1983-44be-bf13-ac41e92eec7e","Type":"ContainerStarted","Data":"7acca836024ed6c4afb1eb91b171a85aff505e69805eb9775ba00ea795e93cea"} Jan 23 06:43:38 crc kubenswrapper[4784]: I0123 06:43:38.023234 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z6kmb" event={"ID":"8576e5ec-00dc-45b9-93b2-b76f32e3e92d","Type":"ContainerStarted","Data":"3be0beac7ce8f906a235b93eb71165a56771396816917ecf053415e4f3cb48f8"} Jan 23 06:43:38 crc kubenswrapper[4784]: I0123 06:43:38.024965 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b75f-account-create-update-nlbg8" event={"ID":"1c060372-b812-4f94-90c1-b87a4a20c12e","Type":"ContainerStarted","Data":"d1e91befb70e55636df17815b572de1be0b6bb347b5cbf21d7f48e2d9fb65fc4"} Jan 23 06:43:38 crc kubenswrapper[4784]: I0123 06:43:38.062908 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hklgl"] Jan 23 06:43:38 crc kubenswrapper[4784]: I0123 06:43:38.303055 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-43e3-account-create-update-9c2q2"] Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.048739 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1458-account-create-update-v72tc" 
event={"ID":"35725aa2-6c23-4676-a612-b169efb88e5b","Type":"ContainerStarted","Data":"ca9ab8623cbdfb7e71bc4c9f5cb4608a5b0db8854890781ddd56244a030d3b7e"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.051001 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1458-account-create-update-v72tc" event={"ID":"35725aa2-6c23-4676-a612-b169efb88e5b","Type":"ContainerStarted","Data":"dd4beaeaa428cd3eb24fbe6ba3c8320d890265a3b69a3e256e0a873f50b14cc6"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.053987 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hklgl" event={"ID":"384a5279-9005-4fd7-882e-e14349adfe06","Type":"ContainerStarted","Data":"8ef82e05c36bb7d3fb3d11394393c8ee251546855b91bc2e56368ee9d2c74116"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.054091 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hklgl" event={"ID":"384a5279-9005-4fd7-882e-e14349adfe06","Type":"ContainerStarted","Data":"d7e0213ad98457c233297f91aa27fabba3723e435c1e1f8152006022232edd10"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.062380 4784 generic.go:334] "Generic (PLEG): container finished" podID="ea803a89-1983-44be-bf13-ac41e92eec7e" containerID="6b93e36a22a950541a42251c9be727e7ff4492866306647db8fddc74b9c95e6d" exitCode=0 Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.062538 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q7h5p" event={"ID":"ea803a89-1983-44be-bf13-ac41e92eec7e","Type":"ContainerDied","Data":"6b93e36a22a950541a42251c9be727e7ff4492866306647db8fddc74b9c95e6d"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.065741 4784 generic.go:334] "Generic (PLEG): container finished" podID="8576e5ec-00dc-45b9-93b2-b76f32e3e92d" containerID="101cf29ae09d0239b57abaf7afba3ce1d158cb25ddbf52892ba3f5f01453dc45" exitCode=0 Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.065847 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z6kmb" event={"ID":"8576e5ec-00dc-45b9-93b2-b76f32e3e92d","Type":"ContainerDied","Data":"101cf29ae09d0239b57abaf7afba3ce1d158cb25ddbf52892ba3f5f01453dc45"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.074938 4784 generic.go:334] "Generic (PLEG): container finished" podID="1c060372-b812-4f94-90c1-b87a4a20c12e" containerID="dfb52d448c5e9d801573262cc204f60db14b6d457e22580adea27afa008f2401" exitCode=0 Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.075241 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b75f-account-create-update-nlbg8" event={"ID":"1c060372-b812-4f94-90c1-b87a4a20c12e","Type":"ContainerDied","Data":"dfb52d448c5e9d801573262cc204f60db14b6d457e22580adea27afa008f2401"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.078876 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1458-account-create-update-v72tc" podStartSLOduration=3.078849312 podStartE2EDuration="3.078849312s" podCreationTimestamp="2026-01-23 06:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:39.075502309 +0000 UTC m=+1422.308010283" watchObservedRunningTime="2026-01-23 06:43:39.078849312 +0000 UTC m=+1422.311357276" Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.095796 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerStarted","Data":"e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.102112 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" 
event={"ID":"a6c136c6-ca42-4080-ac37-582e3e86847f","Type":"ContainerStarted","Data":"a6b429214cab838e529698897eb23c7413cb204ef872f2f18ca2be34a1671aea"} Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.149334 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-hklgl" podStartSLOduration=3.149299696 podStartE2EDuration="3.149299696s" podCreationTimestamp="2026-01-23 06:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:39.14498855 +0000 UTC m=+1422.377496524" watchObservedRunningTime="2026-01-23 06:43:39.149299696 +0000 UTC m=+1422.381807670" Jan 23 06:43:39 crc kubenswrapper[4784]: I0123 06:43:39.194433 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.116812 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerStarted","Data":"3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479"} Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.119340 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" event={"ID":"a6c136c6-ca42-4080-ac37-582e3e86847f","Type":"ContainerStarted","Data":"d6f1f280bdd9658fb63118eb1be953d286c228ebaccc4e7732a2527be84f7df3"} Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.611305 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.792574 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-operator-scripts\") pod \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\" (UID: \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\") " Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.792728 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pwtt\" (UniqueName: \"kubernetes.io/projected/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-kube-api-access-5pwtt\") pod \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\" (UID: \"8576e5ec-00dc-45b9-93b2-b76f32e3e92d\") " Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.793622 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8576e5ec-00dc-45b9-93b2-b76f32e3e92d" (UID: "8576e5ec-00dc-45b9-93b2-b76f32e3e92d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.797487 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.802008 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-kube-api-access-5pwtt" (OuterVolumeSpecName: "kube-api-access-5pwtt") pod "8576e5ec-00dc-45b9-93b2-b76f32e3e92d" (UID: "8576e5ec-00dc-45b9-93b2-b76f32e3e92d"). InnerVolumeSpecName "kube-api-access-5pwtt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.805675 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.895996 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkrdl\" (UniqueName: \"kubernetes.io/projected/ea803a89-1983-44be-bf13-ac41e92eec7e-kube-api-access-rkrdl\") pod \"ea803a89-1983-44be-bf13-ac41e92eec7e\" (UID: \"ea803a89-1983-44be-bf13-ac41e92eec7e\") " Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.896407 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c060372-b812-4f94-90c1-b87a4a20c12e-operator-scripts\") pod \"1c060372-b812-4f94-90c1-b87a4a20c12e\" (UID: \"1c060372-b812-4f94-90c1-b87a4a20c12e\") " Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.896614 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea803a89-1983-44be-bf13-ac41e92eec7e-operator-scripts\") pod \"ea803a89-1983-44be-bf13-ac41e92eec7e\" (UID: \"ea803a89-1983-44be-bf13-ac41e92eec7e\") " Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.896780 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j5pv\" (UniqueName: \"kubernetes.io/projected/1c060372-b812-4f94-90c1-b87a4a20c12e-kube-api-access-8j5pv\") pod \"1c060372-b812-4f94-90c1-b87a4a20c12e\" (UID: \"1c060372-b812-4f94-90c1-b87a4a20c12e\") " Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.897606 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pwtt\" (UniqueName: \"kubernetes.io/projected/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-kube-api-access-5pwtt\") on node \"crc\" DevicePath \"\"" Jan 23 
06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.897740 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8576e5ec-00dc-45b9-93b2-b76f32e3e92d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.897606 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c060372-b812-4f94-90c1-b87a4a20c12e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c060372-b812-4f94-90c1-b87a4a20c12e" (UID: "1c060372-b812-4f94-90c1-b87a4a20c12e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.897666 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea803a89-1983-44be-bf13-ac41e92eec7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea803a89-1983-44be-bf13-ac41e92eec7e" (UID: "ea803a89-1983-44be-bf13-ac41e92eec7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.906075 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c060372-b812-4f94-90c1-b87a4a20c12e-kube-api-access-8j5pv" (OuterVolumeSpecName: "kube-api-access-8j5pv") pod "1c060372-b812-4f94-90c1-b87a4a20c12e" (UID: "1c060372-b812-4f94-90c1-b87a4a20c12e"). InnerVolumeSpecName "kube-api-access-8j5pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:40 crc kubenswrapper[4784]: I0123 06:43:40.906968 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea803a89-1983-44be-bf13-ac41e92eec7e-kube-api-access-rkrdl" (OuterVolumeSpecName: "kube-api-access-rkrdl") pod "ea803a89-1983-44be-bf13-ac41e92eec7e" (UID: "ea803a89-1983-44be-bf13-ac41e92eec7e"). 
InnerVolumeSpecName "kube-api-access-rkrdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.000325 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea803a89-1983-44be-bf13-ac41e92eec7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.000368 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j5pv\" (UniqueName: \"kubernetes.io/projected/1c060372-b812-4f94-90c1-b87a4a20c12e-kube-api-access-8j5pv\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.000381 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkrdl\" (UniqueName: \"kubernetes.io/projected/ea803a89-1983-44be-bf13-ac41e92eec7e-kube-api-access-rkrdl\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.000391 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c060372-b812-4f94-90c1-b87a4a20c12e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.143019 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-q7h5p" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.147234 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-q7h5p" event={"ID":"ea803a89-1983-44be-bf13-ac41e92eec7e","Type":"ContainerDied","Data":"7acca836024ed6c4afb1eb91b171a85aff505e69805eb9775ba00ea795e93cea"} Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.147287 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7acca836024ed6c4afb1eb91b171a85aff505e69805eb9775ba00ea795e93cea" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.152865 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z6kmb" event={"ID":"8576e5ec-00dc-45b9-93b2-b76f32e3e92d","Type":"ContainerDied","Data":"3be0beac7ce8f906a235b93eb71165a56771396816917ecf053415e4f3cb48f8"} Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.152923 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3be0beac7ce8f906a235b93eb71165a56771396816917ecf053415e4f3cb48f8" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.152973 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-z6kmb" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.156813 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b75f-account-create-update-nlbg8" event={"ID":"1c060372-b812-4f94-90c1-b87a4a20c12e","Type":"ContainerDied","Data":"d1e91befb70e55636df17815b572de1be0b6bb347b5cbf21d7f48e2d9fb65fc4"} Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.156855 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e91befb70e55636df17815b572de1be0b6bb347b5cbf21d7f48e2d9fb65fc4" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.156908 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b75f-account-create-update-nlbg8" Jan 23 06:43:41 crc kubenswrapper[4784]: I0123 06:43:41.191961 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" podStartSLOduration=5.191928401 podStartE2EDuration="5.191928401s" podCreationTimestamp="2026-01-23 06:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:41.180645743 +0000 UTC m=+1424.413153727" watchObservedRunningTime="2026-01-23 06:43:41.191928401 +0000 UTC m=+1424.424436375" Jan 23 06:43:42 crc kubenswrapper[4784]: I0123 06:43:42.173212 4784 generic.go:334] "Generic (PLEG): container finished" podID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerID="f7234ac9ecffe753b48da6d5bfd59256741840b45b7b8c635986505e2bff1cbf" exitCode=0 Jan 23 06:43:42 crc kubenswrapper[4784]: I0123 06:43:42.173313 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3565a005-cf5e-43c0-ab31-59071dc6fb9c","Type":"ContainerDied","Data":"f7234ac9ecffe753b48da6d5bfd59256741840b45b7b8c635986505e2bff1cbf"} Jan 23 06:43:43 crc kubenswrapper[4784]: I0123 06:43:43.340973 4784 generic.go:334] "Generic (PLEG): container finished" podID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerID="0b025da38950e35051ff144502203a873d0391f48eac9ab72a2003adfd788b87" exitCode=137 Jan 23 06:43:43 crc kubenswrapper[4784]: I0123 06:43:43.341484 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79d47d6854-hfx9p" event={"ID":"8d31d380-7e87-4ce6-bbfe-5f3788456978","Type":"ContainerDied","Data":"0b025da38950e35051ff144502203a873d0391f48eac9ab72a2003adfd788b87"} Jan 23 06:43:43 crc kubenswrapper[4784]: I0123 06:43:43.466842 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:43:43 crc 
kubenswrapper[4784]: I0123 06:43:43.467195 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerName="glance-log" containerID="cri-o://8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda" gracePeriod=30 Jan 23 06:43:43 crc kubenswrapper[4784]: I0123 06:43:43.467395 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerName="glance-httpd" containerID="cri-o://ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c" gracePeriod=30 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.111737 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.120045 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.197832 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-combined-ca-bundle\") pod \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.197918 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-secret-key\") pod \"8d31d380-7e87-4ce6-bbfe-5f3788456978\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198046 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24l8q\" (UniqueName: \"kubernetes.io/projected/3565a005-cf5e-43c0-ab31-59071dc6fb9c-kube-api-access-24l8q\") pod \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198081 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-combined-ca-bundle\") pod \"8d31d380-7e87-4ce6-bbfe-5f3788456978\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198184 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-scripts\") pod \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198227 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198255 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-logs\") pod \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198297 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-internal-tls-certs\") pod \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198376 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-tls-certs\") pod \"8d31d380-7e87-4ce6-bbfe-5f3788456978\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198470 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-scripts\") pod \"8d31d380-7e87-4ce6-bbfe-5f3788456978\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198512 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-httpd-run\") pod \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198560 4784 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-config-data\") pod \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\" (UID: \"3565a005-cf5e-43c0-ab31-59071dc6fb9c\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198600 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-config-data\") pod \"8d31d380-7e87-4ce6-bbfe-5f3788456978\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198623 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7szqd\" (UniqueName: \"kubernetes.io/projected/8d31d380-7e87-4ce6-bbfe-5f3788456978-kube-api-access-7szqd\") pod \"8d31d380-7e87-4ce6-bbfe-5f3788456978\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.198661 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d31d380-7e87-4ce6-bbfe-5f3788456978-logs\") pod \"8d31d380-7e87-4ce6-bbfe-5f3788456978\" (UID: \"8d31d380-7e87-4ce6-bbfe-5f3788456978\") " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.199483 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-logs" (OuterVolumeSpecName: "logs") pod "3565a005-cf5e-43c0-ab31-59071dc6fb9c" (UID: "3565a005-cf5e-43c0-ab31-59071dc6fb9c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.200098 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d31d380-7e87-4ce6-bbfe-5f3788456978-logs" (OuterVolumeSpecName: "logs") pod "8d31d380-7e87-4ce6-bbfe-5f3788456978" (UID: "8d31d380-7e87-4ce6-bbfe-5f3788456978"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.202744 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3565a005-cf5e-43c0-ab31-59071dc6fb9c" (UID: "3565a005-cf5e-43c0-ab31-59071dc6fb9c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.265766 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d31d380-7e87-4ce6-bbfe-5f3788456978-kube-api-access-7szqd" (OuterVolumeSpecName: "kube-api-access-7szqd") pod "8d31d380-7e87-4ce6-bbfe-5f3788456978" (UID: "8d31d380-7e87-4ce6-bbfe-5f3788456978"). InnerVolumeSpecName "kube-api-access-7szqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.269582 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8d31d380-7e87-4ce6-bbfe-5f3788456978" (UID: "8d31d380-7e87-4ce6-bbfe-5f3788456978"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.271933 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "3565a005-cf5e-43c0-ab31-59071dc6fb9c" (UID: "3565a005-cf5e-43c0-ab31-59071dc6fb9c"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.286956 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3565a005-cf5e-43c0-ab31-59071dc6fb9c-kube-api-access-24l8q" (OuterVolumeSpecName: "kube-api-access-24l8q") pod "3565a005-cf5e-43c0-ab31-59071dc6fb9c" (UID: "3565a005-cf5e-43c0-ab31-59071dc6fb9c"). InnerVolumeSpecName "kube-api-access-24l8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.292358 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-scripts" (OuterVolumeSpecName: "scripts") pod "3565a005-cf5e-43c0-ab31-59071dc6fb9c" (UID: "3565a005-cf5e-43c0-ab31-59071dc6fb9c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.303868 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d31d380-7e87-4ce6-bbfe-5f3788456978-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.303913 4784 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.303924 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24l8q\" (UniqueName: \"kubernetes.io/projected/3565a005-cf5e-43c0-ab31-59071dc6fb9c-kube-api-access-24l8q\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.303933 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.303943 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.303972 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.303982 4784 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3565a005-cf5e-43c0-ab31-59071dc6fb9c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.303991 4784 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-7szqd\" (UniqueName: \"kubernetes.io/projected/8d31d380-7e87-4ce6-bbfe-5f3788456978-kube-api-access-7szqd\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.322259 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-config-data" (OuterVolumeSpecName: "config-data") pod "8d31d380-7e87-4ce6-bbfe-5f3788456978" (UID: "8d31d380-7e87-4ce6-bbfe-5f3788456978"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.369295 4784 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.369930 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "8d31d380-7e87-4ce6-bbfe-5f3788456978" (UID: "8d31d380-7e87-4ce6-bbfe-5f3788456978"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.382142 4784 generic.go:334] "Generic (PLEG): container finished" podID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerID="8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda" exitCode=143 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.382248 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c82e190-0062-4ebc-8ee5-74401deb567e","Type":"ContainerDied","Data":"8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda"} Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.388344 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-scripts" (OuterVolumeSpecName: "scripts") pod "8d31d380-7e87-4ce6-bbfe-5f3788456978" (UID: "8d31d380-7e87-4ce6-bbfe-5f3788456978"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.389062 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3565a005-cf5e-43c0-ab31-59071dc6fb9c" (UID: "3565a005-cf5e-43c0-ab31-59071dc6fb9c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: E0123 06:43:44.397604 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod384a5279_9005_4fd7_882e_e14349adfe06.slice/crio-8ef82e05c36bb7d3fb3d11394393c8ee251546855b91bc2e56368ee9d2c74116.scope\": RecentStats: unable to find data in memory cache]" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.406194 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.406248 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.406260 4784 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.406271 4784 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.406281 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d31d380-7e87-4ce6-bbfe-5f3788456978-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.432930 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "8d31d380-7e87-4ce6-bbfe-5f3788456978" (UID: "8d31d380-7e87-4ce6-bbfe-5f3788456978"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.434980 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3565a005-cf5e-43c0-ab31-59071dc6fb9c" (UID: "3565a005-cf5e-43c0-ab31-59071dc6fb9c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.440004 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerStarted","Data":"79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6"} Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.440267 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="ceilometer-central-agent" containerID="cri-o://def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44" gracePeriod=30 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.440357 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.440798 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="proxy-httpd" containerID="cri-o://79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6" gracePeriod=30 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.440854 4784 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="sg-core" containerID="cri-o://3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479" gracePeriod=30 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.440895 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="ceilometer-notification-agent" containerID="cri-o://e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218" gracePeriod=30 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.454995 4784 generic.go:334] "Generic (PLEG): container finished" podID="a6c136c6-ca42-4080-ac37-582e3e86847f" containerID="d6f1f280bdd9658fb63118eb1be953d286c228ebaccc4e7732a2527be84f7df3" exitCode=0 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.455072 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" event={"ID":"a6c136c6-ca42-4080-ac37-582e3e86847f","Type":"ContainerDied","Data":"d6f1f280bdd9658fb63118eb1be953d286c228ebaccc4e7732a2527be84f7df3"} Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.483571 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.085865802 podStartE2EDuration="10.483534632s" podCreationTimestamp="2026-01-23 06:43:34 +0000 UTC" firstStartedPulling="2026-01-23 06:43:35.275329329 +0000 UTC m=+1418.507837303" lastFinishedPulling="2026-01-23 06:43:43.672998159 +0000 UTC m=+1426.905506133" observedRunningTime="2026-01-23 06:43:44.482422645 +0000 UTC m=+1427.714930619" watchObservedRunningTime="2026-01-23 06:43:44.483534632 +0000 UTC m=+1427.716042606" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.493791 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.493798 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3565a005-cf5e-43c0-ab31-59071dc6fb9c","Type":"ContainerDied","Data":"0fbd53a084677eb9fff192e0dbd7fda4301b3b0c38333286bdbe97c2fb2038d1"} Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.494077 4784 scope.go:117] "RemoveContainer" containerID="f7234ac9ecffe753b48da6d5bfd59256741840b45b7b8c635986505e2bff1cbf" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.509244 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79d47d6854-hfx9p" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.509499 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79d47d6854-hfx9p" event={"ID":"8d31d380-7e87-4ce6-bbfe-5f3788456978","Type":"ContainerDied","Data":"f23f4afd6f8b4d148b28d5d4cad8bc872ad030747cbba30816101fd6d3133005"} Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.509919 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-config-data" (OuterVolumeSpecName: "config-data") pod "3565a005-cf5e-43c0-ab31-59071dc6fb9c" (UID: "3565a005-cf5e-43c0-ab31-59071dc6fb9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.510918 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d31d380-7e87-4ce6-bbfe-5f3788456978-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.513483 4784 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.513510 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3565a005-cf5e-43c0-ab31-59071dc6fb9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.518178 4784 generic.go:334] "Generic (PLEG): container finished" podID="35725aa2-6c23-4676-a612-b169efb88e5b" containerID="ca9ab8623cbdfb7e71bc4c9f5cb4608a5b0db8854890781ddd56244a030d3b7e" exitCode=0 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.518402 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1458-account-create-update-v72tc" event={"ID":"35725aa2-6c23-4676-a612-b169efb88e5b","Type":"ContainerDied","Data":"ca9ab8623cbdfb7e71bc4c9f5cb4608a5b0db8854890781ddd56244a030d3b7e"} Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.527543 4784 generic.go:334] "Generic (PLEG): container finished" podID="384a5279-9005-4fd7-882e-e14349adfe06" containerID="8ef82e05c36bb7d3fb3d11394393c8ee251546855b91bc2e56368ee9d2c74116" exitCode=0 Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.527615 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hklgl" 
event={"ID":"384a5279-9005-4fd7-882e-e14349adfe06","Type":"ContainerDied","Data":"8ef82e05c36bb7d3fb3d11394393c8ee251546855b91bc2e56368ee9d2c74116"} Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.543336 4784 scope.go:117] "RemoveContainer" containerID="0113bed312592c480c7f7b5dc6a9466b8552bc8d763951f25f640709e4cf3757" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.596344 4784 scope.go:117] "RemoveContainer" containerID="8ceae607bf3d1a305e21df79b6d78c685530e9c5947012ef6b094625790484a4" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.606774 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79d47d6854-hfx9p"] Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.617856 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79d47d6854-hfx9p"] Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.790080 4784 scope.go:117] "RemoveContainer" containerID="0b025da38950e35051ff144502203a873d0391f48eac9ab72a2003adfd788b87" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.860561 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.878657 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.895523 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:43:44 crc kubenswrapper[4784]: E0123 06:43:44.896261 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.896286 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" Jan 23 06:43:44 crc kubenswrapper[4784]: E0123 06:43:44.896304 4784 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ea803a89-1983-44be-bf13-ac41e92eec7e" containerName="mariadb-database-create" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.896312 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea803a89-1983-44be-bf13-ac41e92eec7e" containerName="mariadb-database-create" Jan 23 06:43:44 crc kubenswrapper[4784]: E0123 06:43:44.896569 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon-log" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.896581 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon-log" Jan 23 06:43:44 crc kubenswrapper[4784]: E0123 06:43:44.896602 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8576e5ec-00dc-45b9-93b2-b76f32e3e92d" containerName="mariadb-database-create" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.896610 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8576e5ec-00dc-45b9-93b2-b76f32e3e92d" containerName="mariadb-database-create" Jan 23 06:43:44 crc kubenswrapper[4784]: E0123 06:43:44.896637 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerName="glance-log" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.896652 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerName="glance-log" Jan 23 06:43:44 crc kubenswrapper[4784]: E0123 06:43:44.896674 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c060372-b812-4f94-90c1-b87a4a20c12e" containerName="mariadb-account-create-update" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.896683 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c060372-b812-4f94-90c1-b87a4a20c12e" containerName="mariadb-account-create-update" Jan 23 06:43:44 crc kubenswrapper[4784]: E0123 06:43:44.896697 4784 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerName="glance-httpd" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.896704 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerName="glance-httpd" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.896972 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerName="glance-log" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.897001 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" containerName="glance-httpd" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.897011 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea803a89-1983-44be-bf13-ac41e92eec7e" containerName="mariadb-database-create" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.897035 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.897055 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8576e5ec-00dc-45b9-93b2-b76f32e3e92d" containerName="mariadb-database-create" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.897070 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" containerName="horizon-log" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.897085 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c060372-b812-4f94-90c1-b87a4a20c12e" containerName="mariadb-account-create-update" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.899796 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.903209 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.903413 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 06:43:44 crc kubenswrapper[4784]: I0123 06:43:44.913224 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.027599 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.027677 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.027938 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.028056 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.028150 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-logs\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.028350 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.028481 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.028605 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9brl\" (UniqueName: \"kubernetes.io/projected/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-kube-api-access-v9brl\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.130610 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-v9brl\" (UniqueName: \"kubernetes.io/projected/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-kube-api-access-v9brl\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.130775 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.130825 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.130911 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.130952 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.130974 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-logs\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.131029 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.131059 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.131580 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.132338 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.132492 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.138465 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.141899 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.153784 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.176878 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9brl\" (UniqueName: \"kubernetes.io/projected/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-kube-api-access-v9brl\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.176986 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21429e2a-c0f1-47fa-8a30-0577e1e9e72c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " 
pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.181165 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"21429e2a-c0f1-47fa-8a30-0577e1e9e72c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.270629 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3565a005-cf5e-43c0-ab31-59071dc6fb9c" path="/var/lib/kubelet/pods/3565a005-cf5e-43c0-ab31-59071dc6fb9c/volumes" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.271389 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d31d380-7e87-4ce6-bbfe-5f3788456978" path="/var/lib/kubelet/pods/8d31d380-7e87-4ce6-bbfe-5f3788456978/volumes" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.307456 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.556008 4784 generic.go:334] "Generic (PLEG): container finished" podID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerID="3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479" exitCode=2 Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.556297 4784 generic.go:334] "Generic (PLEG): container finished" podID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerID="e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218" exitCode=0 Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.556305 4784 generic.go:334] "Generic (PLEG): container finished" podID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerID="def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44" exitCode=0 Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.556378 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerDied","Data":"3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479"} Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.556407 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerDied","Data":"e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218"} Jan 23 06:43:45 crc kubenswrapper[4784]: I0123 06:43:45.556417 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerDied","Data":"def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44"} Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.390557 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.536865 4784 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.544980 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.552301 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.581958 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z584k\" (UniqueName: \"kubernetes.io/projected/384a5279-9005-4fd7-882e-e14349adfe06-kube-api-access-z584k\") pod \"384a5279-9005-4fd7-882e-e14349adfe06\" (UID: \"384a5279-9005-4fd7-882e-e14349adfe06\") " Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.582195 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35725aa2-6c23-4676-a612-b169efb88e5b-operator-scripts\") pod \"35725aa2-6c23-4676-a612-b169efb88e5b\" (UID: \"35725aa2-6c23-4676-a612-b169efb88e5b\") " Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.582249 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384a5279-9005-4fd7-882e-e14349adfe06-operator-scripts\") pod \"384a5279-9005-4fd7-882e-e14349adfe06\" (UID: \"384a5279-9005-4fd7-882e-e14349adfe06\") " Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.582346 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx99j\" (UniqueName: \"kubernetes.io/projected/35725aa2-6c23-4676-a612-b169efb88e5b-kube-api-access-vx99j\") pod \"35725aa2-6c23-4676-a612-b169efb88e5b\" (UID: \"35725aa2-6c23-4676-a612-b169efb88e5b\") " Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 
06:43:46.594425 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35725aa2-6c23-4676-a612-b169efb88e5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35725aa2-6c23-4676-a612-b169efb88e5b" (UID: "35725aa2-6c23-4676-a612-b169efb88e5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.599977 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384a5279-9005-4fd7-882e-e14349adfe06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "384a5279-9005-4fd7-882e-e14349adfe06" (UID: "384a5279-9005-4fd7-882e-e14349adfe06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.606250 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384a5279-9005-4fd7-882e-e14349adfe06-kube-api-access-z584k" (OuterVolumeSpecName: "kube-api-access-z584k") pod "384a5279-9005-4fd7-882e-e14349adfe06" (UID: "384a5279-9005-4fd7-882e-e14349adfe06"). InnerVolumeSpecName "kube-api-access-z584k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.613658 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21429e2a-c0f1-47fa-8a30-0577e1e9e72c","Type":"ContainerStarted","Data":"fbbe3a7ff962bb73431c3073b6707705e800d506def36b3bb7afdaad7d1d6ea8"} Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.629197 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35725aa2-6c23-4676-a612-b169efb88e5b-kube-api-access-vx99j" (OuterVolumeSpecName: "kube-api-access-vx99j") pod "35725aa2-6c23-4676-a612-b169efb88e5b" (UID: "35725aa2-6c23-4676-a612-b169efb88e5b"). 
InnerVolumeSpecName "kube-api-access-vx99j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.632684 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" event={"ID":"a6c136c6-ca42-4080-ac37-582e3e86847f","Type":"ContainerDied","Data":"a6b429214cab838e529698897eb23c7413cb204ef872f2f18ca2be34a1671aea"} Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.632771 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b429214cab838e529698897eb23c7413cb204ef872f2f18ca2be34a1671aea" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.632880 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-43e3-account-create-update-9c2q2" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.665178 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1458-account-create-update-v72tc" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.665177 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1458-account-create-update-v72tc" event={"ID":"35725aa2-6c23-4676-a612-b169efb88e5b","Type":"ContainerDied","Data":"dd4beaeaa428cd3eb24fbe6ba3c8320d890265a3b69a3e256e0a873f50b14cc6"} Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.667831 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4beaeaa428cd3eb24fbe6ba3c8320d890265a3b69a3e256e0a873f50b14cc6" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.673237 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hklgl" event={"ID":"384a5279-9005-4fd7-882e-e14349adfe06","Type":"ContainerDied","Data":"d7e0213ad98457c233297f91aa27fabba3723e435c1e1f8152006022232edd10"} Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.673292 4784 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e0213ad98457c233297f91aa27fabba3723e435c1e1f8152006022232edd10" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.673374 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hklgl" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.685990 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxg58\" (UniqueName: \"kubernetes.io/projected/a6c136c6-ca42-4080-ac37-582e3e86847f-kube-api-access-mxg58\") pod \"a6c136c6-ca42-4080-ac37-582e3e86847f\" (UID: \"a6c136c6-ca42-4080-ac37-582e3e86847f\") " Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.686158 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6c136c6-ca42-4080-ac37-582e3e86847f-operator-scripts\") pod \"a6c136c6-ca42-4080-ac37-582e3e86847f\" (UID: \"a6c136c6-ca42-4080-ac37-582e3e86847f\") " Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.687046 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx99j\" (UniqueName: \"kubernetes.io/projected/35725aa2-6c23-4676-a612-b169efb88e5b-kube-api-access-vx99j\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.687075 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z584k\" (UniqueName: \"kubernetes.io/projected/384a5279-9005-4fd7-882e-e14349adfe06-kube-api-access-z584k\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.687090 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35725aa2-6c23-4676-a612-b169efb88e5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.687103 4784 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384a5279-9005-4fd7-882e-e14349adfe06-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.687735 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c136c6-ca42-4080-ac37-582e3e86847f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6c136c6-ca42-4080-ac37-582e3e86847f" (UID: "a6c136c6-ca42-4080-ac37-582e3e86847f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.693826 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c136c6-ca42-4080-ac37-582e3e86847f-kube-api-access-mxg58" (OuterVolumeSpecName: "kube-api-access-mxg58") pod "a6c136c6-ca42-4080-ac37-582e3e86847f" (UID: "a6c136c6-ca42-4080-ac37-582e3e86847f"). InnerVolumeSpecName "kube-api-access-mxg58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.791147 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxg58\" (UniqueName: \"kubernetes.io/projected/a6c136c6-ca42-4080-ac37-582e3e86847f-kube-api-access-mxg58\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:46 crc kubenswrapper[4784]: I0123 06:43:46.791537 4784 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6c136c6-ca42-4080-ac37-582e3e86847f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.603130 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.724416 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-public-tls-certs\") pod \"7c82e190-0062-4ebc-8ee5-74401deb567e\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.726222 4784 generic.go:334] "Generic (PLEG): container finished" podID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerID="ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c" exitCode=0 Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.726386 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c82e190-0062-4ebc-8ee5-74401deb567e","Type":"ContainerDied","Data":"ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c"} Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.726428 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c82e190-0062-4ebc-8ee5-74401deb567e","Type":"ContainerDied","Data":"1b79d7abcb7ef275a45e014daede5ee75a8d4a43723e49c2516022762a729924"} Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.726454 4784 scope.go:117] "RemoveContainer" containerID="ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.726803 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.731064 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21429e2a-c0f1-47fa-8a30-0577e1e9e72c","Type":"ContainerStarted","Data":"c9d66f0afcee4535b8f835cdc06862391d30cf139ea59395738557c8ed6a08af"} Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.733875 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-logs\") pod \"7c82e190-0062-4ebc-8ee5-74401deb567e\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.734182 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-scripts\") pod \"7c82e190-0062-4ebc-8ee5-74401deb567e\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.736692 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-logs" (OuterVolumeSpecName: "logs") pod "7c82e190-0062-4ebc-8ee5-74401deb567e" (UID: "7c82e190-0062-4ebc-8ee5-74401deb567e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.747354 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"7c82e190-0062-4ebc-8ee5-74401deb567e\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.747650 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jtm8\" (UniqueName: \"kubernetes.io/projected/7c82e190-0062-4ebc-8ee5-74401deb567e-kube-api-access-7jtm8\") pod \"7c82e190-0062-4ebc-8ee5-74401deb567e\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.747684 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-httpd-run\") pod \"7c82e190-0062-4ebc-8ee5-74401deb567e\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.748363 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-combined-ca-bundle\") pod \"7c82e190-0062-4ebc-8ee5-74401deb567e\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.748525 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-config-data\") pod \"7c82e190-0062-4ebc-8ee5-74401deb567e\" (UID: \"7c82e190-0062-4ebc-8ee5-74401deb567e\") " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.752521 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7c82e190-0062-4ebc-8ee5-74401deb567e" (UID: "7c82e190-0062-4ebc-8ee5-74401deb567e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.753798 4784 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.755043 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c82e190-0062-4ebc-8ee5-74401deb567e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.781864 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "7c82e190-0062-4ebc-8ee5-74401deb567e" (UID: "7c82e190-0062-4ebc-8ee5-74401deb567e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.782863 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c82e190-0062-4ebc-8ee5-74401deb567e-kube-api-access-7jtm8" (OuterVolumeSpecName: "kube-api-access-7jtm8") pod "7c82e190-0062-4ebc-8ee5-74401deb567e" (UID: "7c82e190-0062-4ebc-8ee5-74401deb567e"). InnerVolumeSpecName "kube-api-access-7jtm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.793008 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-scripts" (OuterVolumeSpecName: "scripts") pod "7c82e190-0062-4ebc-8ee5-74401deb567e" (UID: "7c82e190-0062-4ebc-8ee5-74401deb567e"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.804864 4784 scope.go:117] "RemoveContainer" containerID="8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.808260 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7c82e190-0062-4ebc-8ee5-74401deb567e" (UID: "7c82e190-0062-4ebc-8ee5-74401deb567e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.850624 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c82e190-0062-4ebc-8ee5-74401deb567e" (UID: "7c82e190-0062-4ebc-8ee5-74401deb567e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.865230 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-config-data" (OuterVolumeSpecName: "config-data") pod "7c82e190-0062-4ebc-8ee5-74401deb567e" (UID: "7c82e190-0062-4ebc-8ee5-74401deb567e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.872567 4784 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.872792 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.881547 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.882228 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jtm8\" (UniqueName: \"kubernetes.io/projected/7c82e190-0062-4ebc-8ee5-74401deb567e-kube-api-access-7jtm8\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.882306 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.882366 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c82e190-0062-4ebc-8ee5-74401deb567e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.893406 4784 scope.go:117] "RemoveContainer" containerID="ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c" Jan 23 06:43:47 crc kubenswrapper[4784]: E0123 06:43:47.894313 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c\": container with ID starting with ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c not found: ID does not exist" containerID="ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.894354 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c"} err="failed to get container status \"ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c\": rpc error: code = NotFound desc = could not find container \"ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c\": container with ID starting with ec5323f48b812b692dfaabd80a389c2a19eeada8d9f97d51239b731c537cbc7c not found: ID does not exist" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.894395 4784 scope.go:117] "RemoveContainer" containerID="8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda" Jan 23 06:43:47 crc kubenswrapper[4784]: E0123 06:43:47.896132 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda\": container with ID starting with 8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda not found: ID does not exist" containerID="8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.896165 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda"} err="failed to get container status \"8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda\": rpc error: code = NotFound desc = could not find container \"8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda\": container 
with ID starting with 8669082937e9493abab428f1b051a2f4bb1da7db547817840b6950cf8b812fda not found: ID does not exist" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.932555 4784 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 23 06:43:47 crc kubenswrapper[4784]: I0123 06:43:47.984552 4784 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.066956 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.081662 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.095947 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:43:48 crc kubenswrapper[4784]: E0123 06:43:48.096953 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35725aa2-6c23-4676-a612-b169efb88e5b" containerName="mariadb-account-create-update" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.097044 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="35725aa2-6c23-4676-a612-b169efb88e5b" containerName="mariadb-account-create-update" Jan 23 06:43:48 crc kubenswrapper[4784]: E0123 06:43:48.097153 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c136c6-ca42-4080-ac37-582e3e86847f" containerName="mariadb-account-create-update" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.097219 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c136c6-ca42-4080-ac37-582e3e86847f" containerName="mariadb-account-create-update" Jan 23 06:43:48 crc kubenswrapper[4784]: E0123 06:43:48.097300 
4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerName="glance-log" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.097394 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerName="glance-log" Jan 23 06:43:48 crc kubenswrapper[4784]: E0123 06:43:48.097481 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384a5279-9005-4fd7-882e-e14349adfe06" containerName="mariadb-database-create" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.097549 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="384a5279-9005-4fd7-882e-e14349adfe06" containerName="mariadb-database-create" Jan 23 06:43:48 crc kubenswrapper[4784]: E0123 06:43:48.097633 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerName="glance-httpd" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.097700 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerName="glance-httpd" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.098096 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="35725aa2-6c23-4676-a612-b169efb88e5b" containerName="mariadb-account-create-update" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.098197 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerName="glance-httpd" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.098270 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="384a5279-9005-4fd7-882e-e14349adfe06" containerName="mariadb-database-create" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.098354 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c136c6-ca42-4080-ac37-582e3e86847f" containerName="mariadb-account-create-update" Jan 23 06:43:48 crc 
kubenswrapper[4784]: I0123 06:43:48.098430 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" containerName="glance-log" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.100017 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.108176 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.108640 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.116649 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.188437 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-config-data\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.188721 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e78a8b-1136-4b2e-9d1d-20533086ea3e-logs\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.188937 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01e78a8b-1136-4b2e-9d1d-20533086ea3e-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.189034 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.189350 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.189442 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-scripts\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.189521 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.189624 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg627\" (UniqueName: \"kubernetes.io/projected/01e78a8b-1136-4b2e-9d1d-20533086ea3e-kube-api-access-mg627\") pod 
\"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292186 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-scripts\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292276 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292358 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg627\" (UniqueName: \"kubernetes.io/projected/01e78a8b-1136-4b2e-9d1d-20533086ea3e-kube-api-access-mg627\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292433 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-config-data\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292482 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e78a8b-1136-4b2e-9d1d-20533086ea3e-logs\") pod \"glance-default-external-api-0\" (UID: 
\"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292586 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01e78a8b-1136-4b2e-9d1d-20533086ea3e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292607 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292637 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.292853 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.295633 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01e78a8b-1136-4b2e-9d1d-20533086ea3e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " 
pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.296089 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01e78a8b-1136-4b2e-9d1d-20533086ea3e-logs\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.306097 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.306591 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.306770 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-config-data\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.309311 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01e78a8b-1136-4b2e-9d1d-20533086ea3e-scripts\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.324835 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg627\" (UniqueName: \"kubernetes.io/projected/01e78a8b-1136-4b2e-9d1d-20533086ea3e-kube-api-access-mg627\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.333999 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"01e78a8b-1136-4b2e-9d1d-20533086ea3e\") " pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.425028 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.748671 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21429e2a-c0f1-47fa-8a30-0577e1e9e72c","Type":"ContainerStarted","Data":"55da8fb40d48ea05ef6dfd2a2de0e27459318373f4a38f00360586d14f2d931c"} Jan 23 06:43:48 crc kubenswrapper[4784]: I0123 06:43:48.784344 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.783730722 podStartE2EDuration="4.783730722s" podCreationTimestamp="2026-01-23 06:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:48.775251224 +0000 UTC m=+1432.007759208" watchObservedRunningTime="2026-01-23 06:43:48.783730722 +0000 UTC m=+1432.016238696" Jan 23 06:43:49 crc kubenswrapper[4784]: W0123 06:43:49.171774 4784 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01e78a8b_1136_4b2e_9d1d_20533086ea3e.slice/crio-37e936f4f65d8db0e2c2c89dc45cc964c7cf2af88a8e524904f5fc2213b806d0 WatchSource:0}: Error finding container 37e936f4f65d8db0e2c2c89dc45cc964c7cf2af88a8e524904f5fc2213b806d0: Status 404 returned error can't find the container with id 37e936f4f65d8db0e2c2c89dc45cc964c7cf2af88a8e524904f5fc2213b806d0 Jan 23 06:43:49 crc kubenswrapper[4784]: I0123 06:43:49.175884 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:43:49 crc kubenswrapper[4784]: I0123 06:43:49.268804 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c82e190-0062-4ebc-8ee5-74401deb567e" path="/var/lib/kubelet/pods/7c82e190-0062-4ebc-8ee5-74401deb567e/volumes" Jan 23 06:43:49 crc kubenswrapper[4784]: I0123 06:43:49.763793 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e78a8b-1136-4b2e-9d1d-20533086ea3e","Type":"ContainerStarted","Data":"37e936f4f65d8db0e2c2c89dc45cc964c7cf2af88a8e524904f5fc2213b806d0"} Jan 23 06:43:50 crc kubenswrapper[4784]: I0123 06:43:50.818021 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e78a8b-1136-4b2e-9d1d-20533086ea3e","Type":"ContainerStarted","Data":"bdec109c2401e4d501b0932f96b2a66d235aa7f39f2be06de8cc05a8b34d32ef"} Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.786027 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xv9ck"] Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.788254 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.791032 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.791953 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.796573 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8j4jf" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.798476 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xv9ck"] Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.832957 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"01e78a8b-1136-4b2e-9d1d-20533086ea3e","Type":"ContainerStarted","Data":"dee39e127a91d7b952e11d6fca7744875aef3006e45d2b0319a264344dac7658"} Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.896661 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.896632414 podStartE2EDuration="3.896632414s" podCreationTimestamp="2026-01-23 06:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:43:51.891519249 +0000 UTC m=+1435.124027233" watchObservedRunningTime="2026-01-23 06:43:51.896632414 +0000 UTC m=+1435.129140388" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.937413 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-scripts\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: 
\"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.938092 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.938118 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-config-data\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.938222 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt74k\" (UniqueName: \"kubernetes.io/projected/27b495cf-9626-42ed-ad77-e58aadea9973-kube-api-access-qt74k\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.939973 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l5m57"] Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.942919 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:51 crc kubenswrapper[4784]: I0123 06:43:51.980282 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l5m57"] Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.040464 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt74k\" (UniqueName: \"kubernetes.io/projected/27b495cf-9626-42ed-ad77-e58aadea9973-kube-api-access-qt74k\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.040573 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-utilities\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.040608 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-catalog-content\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.040643 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-scripts\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.040699 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.040722 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-config-data\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.040796 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gcq\" (UniqueName: \"kubernetes.io/projected/c94911b4-c61e-483a-bbf0-6e529adca249-kube-api-access-q9gcq\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.050218 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-config-data\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.052451 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.057395 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-scripts\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.083487 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt74k\" (UniqueName: \"kubernetes.io/projected/27b495cf-9626-42ed-ad77-e58aadea9973-kube-api-access-qt74k\") pod \"nova-cell0-conductor-db-sync-xv9ck\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.115447 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.144506 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9gcq\" (UniqueName: \"kubernetes.io/projected/c94911b4-c61e-483a-bbf0-6e529adca249-kube-api-access-q9gcq\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.144680 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-utilities\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.144732 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-catalog-content\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " 
pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.145548 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-catalog-content\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.146305 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-utilities\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.170095 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9gcq\" (UniqueName: \"kubernetes.io/projected/c94911b4-c61e-483a-bbf0-6e529adca249-kube-api-access-q9gcq\") pod \"redhat-operators-l5m57\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.279892 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.746445 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xv9ck"] Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.855179 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" event={"ID":"27b495cf-9626-42ed-ad77-e58aadea9973","Type":"ContainerStarted","Data":"257c4d4469a20850d079d0c436c1b2812462f533f8c8b9f5754ac87018685ef3"} Jan 23 06:43:52 crc kubenswrapper[4784]: I0123 06:43:52.916278 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l5m57"] Jan 23 06:43:52 crc kubenswrapper[4784]: W0123 06:43:52.922697 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc94911b4_c61e_483a_bbf0_6e529adca249.slice/crio-e432e8e48994464ec65b7cdafc57ae1f70df686ee9ccad5f0eacfae740bd7771 WatchSource:0}: Error finding container e432e8e48994464ec65b7cdafc57ae1f70df686ee9ccad5f0eacfae740bd7771: Status 404 returned error can't find the container with id e432e8e48994464ec65b7cdafc57ae1f70df686ee9ccad5f0eacfae740bd7771 Jan 23 06:43:53 crc kubenswrapper[4784]: I0123 06:43:53.879630 4784 generic.go:334] "Generic (PLEG): container finished" podID="c94911b4-c61e-483a-bbf0-6e529adca249" containerID="226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa" exitCode=0 Jan 23 06:43:53 crc kubenswrapper[4784]: I0123 06:43:53.879839 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5m57" event={"ID":"c94911b4-c61e-483a-bbf0-6e529adca249","Type":"ContainerDied","Data":"226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa"} Jan 23 06:43:53 crc kubenswrapper[4784]: I0123 06:43:53.880168 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-l5m57" event={"ID":"c94911b4-c61e-483a-bbf0-6e529adca249","Type":"ContainerStarted","Data":"e432e8e48994464ec65b7cdafc57ae1f70df686ee9ccad5f0eacfae740bd7771"} Jan 23 06:43:55 crc kubenswrapper[4784]: I0123 06:43:55.312118 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:55 crc kubenswrapper[4784]: I0123 06:43:55.312712 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:55 crc kubenswrapper[4784]: I0123 06:43:55.529439 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:55 crc kubenswrapper[4784]: I0123 06:43:55.545421 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:55 crc kubenswrapper[4784]: I0123 06:43:55.914664 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5m57" event={"ID":"c94911b4-c61e-483a-bbf0-6e529adca249","Type":"ContainerStarted","Data":"b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a"} Jan 23 06:43:55 crc kubenswrapper[4784]: I0123 06:43:55.915181 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:55 crc kubenswrapper[4784]: I0123 06:43:55.915381 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:56 crc kubenswrapper[4784]: I0123 06:43:56.958983 4784 generic.go:334] "Generic (PLEG): container finished" podID="c94911b4-c61e-483a-bbf0-6e529adca249" containerID="b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a" exitCode=0 Jan 23 06:43:56 crc kubenswrapper[4784]: I0123 06:43:56.959072 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-l5m57" event={"ID":"c94911b4-c61e-483a-bbf0-6e529adca249","Type":"ContainerDied","Data":"b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a"} Jan 23 06:43:58 crc kubenswrapper[4784]: I0123 06:43:58.425558 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 06:43:58 crc kubenswrapper[4784]: I0123 06:43:58.425626 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 06:43:58 crc kubenswrapper[4784]: I0123 06:43:58.492527 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 06:43:58 crc kubenswrapper[4784]: I0123 06:43:58.503174 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 06:43:58 crc kubenswrapper[4784]: I0123 06:43:58.503402 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:43:58 crc kubenswrapper[4784]: I0123 06:43:58.548169 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 06:43:58 crc kubenswrapper[4784]: I0123 06:43:58.988179 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 06:43:58 crc kubenswrapper[4784]: I0123 06:43:58.988805 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 06:43:59 crc kubenswrapper[4784]: I0123 06:43:59.007015 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 06:44:01 crc kubenswrapper[4784]: I0123 06:44:01.719864 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 06:44:01 
crc kubenswrapper[4784]: I0123 06:44:01.720437 4784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:44:02 crc kubenswrapper[4784]: I0123 06:44:02.174617 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 06:44:04 crc kubenswrapper[4784]: I0123 06:44:04.313097 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:44:04 crc kubenswrapper[4784]: I0123 06:44:04.314032 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="8efcea72-3c4e-4458-8c0c-0e08a090b037" containerName="watcher-decision-engine" containerID="cri-o://7fe6192d7ae7aa3ee8930b98adda10683c494470230651883cdeb1e9e5d3cd4a" gracePeriod=30 Jan 23 06:44:04 crc kubenswrapper[4784]: I0123 06:44:04.638379 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 06:44:06 crc kubenswrapper[4784]: I0123 06:44:06.128342 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5m57" event={"ID":"c94911b4-c61e-483a-bbf0-6e529adca249","Type":"ContainerStarted","Data":"fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb"} Jan 23 06:44:06 crc kubenswrapper[4784]: I0123 06:44:06.133500 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" event={"ID":"27b495cf-9626-42ed-ad77-e58aadea9973","Type":"ContainerStarted","Data":"679700483f426dcd81199f44c303def09c81e5f9f8be5981ae78876a890280cd"} Jan 23 06:44:06 crc kubenswrapper[4784]: I0123 06:44:06.156775 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l5m57" podStartSLOduration=4.218289735 
podStartE2EDuration="15.156719932s" podCreationTimestamp="2026-01-23 06:43:51 +0000 UTC" firstStartedPulling="2026-01-23 06:43:53.886820038 +0000 UTC m=+1437.119328012" lastFinishedPulling="2026-01-23 06:44:04.825250235 +0000 UTC m=+1448.057758209" observedRunningTime="2026-01-23 06:44:06.150619592 +0000 UTC m=+1449.383127556" watchObservedRunningTime="2026-01-23 06:44:06.156719932 +0000 UTC m=+1449.389227906" Jan 23 06:44:06 crc kubenswrapper[4784]: I0123 06:44:06.184850 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" podStartSLOduration=3.090983504 podStartE2EDuration="15.184810734s" podCreationTimestamp="2026-01-23 06:43:51 +0000 UTC" firstStartedPulling="2026-01-23 06:43:52.758367298 +0000 UTC m=+1435.990875272" lastFinishedPulling="2026-01-23 06:44:04.852194528 +0000 UTC m=+1448.084702502" observedRunningTime="2026-01-23 06:44:06.171509897 +0000 UTC m=+1449.404017871" watchObservedRunningTime="2026-01-23 06:44:06.184810734 +0000 UTC m=+1449.417318708" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.196109 4784 generic.go:334] "Generic (PLEG): container finished" podID="8efcea72-3c4e-4458-8c0c-0e08a090b037" containerID="7fe6192d7ae7aa3ee8930b98adda10683c494470230651883cdeb1e9e5d3cd4a" exitCode=0 Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.196223 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8efcea72-3c4e-4458-8c0c-0e08a090b037","Type":"ContainerDied","Data":"7fe6192d7ae7aa3ee8930b98adda10683c494470230651883cdeb1e9e5d3cd4a"} Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.196869 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8efcea72-3c4e-4458-8c0c-0e08a090b037","Type":"ContainerDied","Data":"8ccbddf59c8d8e74b04ada29869e4b1d0ea87cd27750cb3fde999c02d7fbc59e"} Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.196898 4784 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ccbddf59c8d8e74b04ada29869e4b1d0ea87cd27750cb3fde999c02d7fbc59e" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.196993 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.285999 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-config-data\") pod \"8efcea72-3c4e-4458-8c0c-0e08a090b037\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.286501 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efcea72-3c4e-4458-8c0c-0e08a090b037-logs\") pod \"8efcea72-3c4e-4458-8c0c-0e08a090b037\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.286649 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-custom-prometheus-ca\") pod \"8efcea72-3c4e-4458-8c0c-0e08a090b037\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.286798 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-combined-ca-bundle\") pod \"8efcea72-3c4e-4458-8c0c-0e08a090b037\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.286933 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94svn\" (UniqueName: 
\"kubernetes.io/projected/8efcea72-3c4e-4458-8c0c-0e08a090b037-kube-api-access-94svn\") pod \"8efcea72-3c4e-4458-8c0c-0e08a090b037\" (UID: \"8efcea72-3c4e-4458-8c0c-0e08a090b037\") " Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.289852 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8efcea72-3c4e-4458-8c0c-0e08a090b037-logs" (OuterVolumeSpecName: "logs") pod "8efcea72-3c4e-4458-8c0c-0e08a090b037" (UID: "8efcea72-3c4e-4458-8c0c-0e08a090b037"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.337893 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8efcea72-3c4e-4458-8c0c-0e08a090b037" (UID: "8efcea72-3c4e-4458-8c0c-0e08a090b037"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.338370 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8efcea72-3c4e-4458-8c0c-0e08a090b037" (UID: "8efcea72-3c4e-4458-8c0c-0e08a090b037"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.340293 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8efcea72-3c4e-4458-8c0c-0e08a090b037-kube-api-access-94svn" (OuterVolumeSpecName: "kube-api-access-94svn") pod "8efcea72-3c4e-4458-8c0c-0e08a090b037" (UID: "8efcea72-3c4e-4458-8c0c-0e08a090b037"). InnerVolumeSpecName "kube-api-access-94svn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.391879 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8efcea72-3c4e-4458-8c0c-0e08a090b037-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.391925 4784 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.391945 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.391962 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94svn\" (UniqueName: \"kubernetes.io/projected/8efcea72-3c4e-4458-8c0c-0e08a090b037-kube-api-access-94svn\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.398972 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-config-data" (OuterVolumeSpecName: "config-data") pod "8efcea72-3c4e-4458-8c0c-0e08a090b037" (UID: "8efcea72-3c4e-4458-8c0c-0e08a090b037"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:10 crc kubenswrapper[4784]: I0123 06:44:10.493858 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8efcea72-3c4e-4458-8c0c-0e08a090b037-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.206589 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.252528 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.268996 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.289567 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:44:11 crc kubenswrapper[4784]: E0123 06:44:11.292590 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8efcea72-3c4e-4458-8c0c-0e08a090b037" containerName="watcher-decision-engine" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.292625 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8efcea72-3c4e-4458-8c0c-0e08a090b037" containerName="watcher-decision-engine" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.292911 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8efcea72-3c4e-4458-8c0c-0e08a090b037" containerName="watcher-decision-engine" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.293850 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.301519 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.316742 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.416451 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9d8x\" (UniqueName: \"kubernetes.io/projected/5394d5ac-2fa5-4720-9b3e-b392db36e106-kube-api-access-n9d8x\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.416520 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.416913 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5394d5ac-2fa5-4720-9b3e-b392db36e106-logs\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.417216 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " 
pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.417714 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-config-data\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.520982 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5394d5ac-2fa5-4720-9b3e-b392db36e106-logs\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.521085 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.521153 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-config-data\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.521294 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9d8x\" (UniqueName: \"kubernetes.io/projected/5394d5ac-2fa5-4720-9b3e-b392db36e106-kube-api-access-n9d8x\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc 
kubenswrapper[4784]: I0123 06:44:11.521346 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.521709 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5394d5ac-2fa5-4720-9b3e-b392db36e106-logs\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.529178 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-config-data\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.530336 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.531462 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5394d5ac-2fa5-4720-9b3e-b392db36e106-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.544458 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n9d8x\" (UniqueName: \"kubernetes.io/projected/5394d5ac-2fa5-4720-9b3e-b392db36e106-kube-api-access-n9d8x\") pod \"watcher-decision-engine-0\" (UID: \"5394d5ac-2fa5-4720-9b3e-b392db36e106\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:44:11 crc kubenswrapper[4784]: I0123 06:44:11.623816 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:44:12 crc kubenswrapper[4784]: I0123 06:44:12.149443 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:44:12 crc kubenswrapper[4784]: I0123 06:44:12.220534 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"5394d5ac-2fa5-4720-9b3e-b392db36e106","Type":"ContainerStarted","Data":"1d3daccc3c54f180a930983ac34798d475f2abc81b475e95a76bf3b0df4938bb"} Jan 23 06:44:12 crc kubenswrapper[4784]: I0123 06:44:12.281579 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:44:12 crc kubenswrapper[4784]: I0123 06:44:12.283420 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:44:12 crc kubenswrapper[4784]: I0123 06:44:12.352200 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:44:13 crc kubenswrapper[4784]: I0123 06:44:13.282871 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8efcea72-3c4e-4458-8c0c-0e08a090b037" path="/var/lib/kubelet/pods/8efcea72-3c4e-4458-8c0c-0e08a090b037/volumes" Jan 23 06:44:13 crc kubenswrapper[4784]: I0123 06:44:13.298366 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:44:13 crc kubenswrapper[4784]: I0123 06:44:13.354943 4784 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l5m57"] Jan 23 06:44:14 crc kubenswrapper[4784]: I0123 06:44:14.244074 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"5394d5ac-2fa5-4720-9b3e-b392db36e106","Type":"ContainerStarted","Data":"f20f183479733014129f8c489c3bcce020bfc0bb3294d8c65dfeb5f20a27083f"} Jan 23 06:44:14 crc kubenswrapper[4784]: I0123 06:44:14.267563 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.26749609 podStartE2EDuration="3.26749609s" podCreationTimestamp="2026-01-23 06:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:14.266800723 +0000 UTC m=+1457.499308707" watchObservedRunningTime="2026-01-23 06:44:14.26749609 +0000 UTC m=+1457.500004084" Jan 23 06:44:14 crc kubenswrapper[4784]: I0123 06:44:14.961558 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.021509 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-combined-ca-bundle\") pod \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.021680 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-sg-core-conf-yaml\") pod \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.021718 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-scripts\") pod \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.021777 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-run-httpd\") pod \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.021847 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-log-httpd\") pod \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.022953 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "03014e29-2486-4bde-9c21-8f7b8dac7b3c" (UID: "03014e29-2486-4bde-9c21-8f7b8dac7b3c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.021952 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q9pp\" (UniqueName: \"kubernetes.io/projected/03014e29-2486-4bde-9c21-8f7b8dac7b3c-kube-api-access-8q9pp\") pod \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.022975 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "03014e29-2486-4bde-9c21-8f7b8dac7b3c" (UID: "03014e29-2486-4bde-9c21-8f7b8dac7b3c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.023092 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-config-data\") pod \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\" (UID: \"03014e29-2486-4bde-9c21-8f7b8dac7b3c\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.024302 4784 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.024329 4784 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03014e29-2486-4bde-9c21-8f7b8dac7b3c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.029059 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03014e29-2486-4bde-9c21-8f7b8dac7b3c-kube-api-access-8q9pp" (OuterVolumeSpecName: "kube-api-access-8q9pp") pod "03014e29-2486-4bde-9c21-8f7b8dac7b3c" (UID: "03014e29-2486-4bde-9c21-8f7b8dac7b3c"). InnerVolumeSpecName "kube-api-access-8q9pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.043819 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-scripts" (OuterVolumeSpecName: "scripts") pod "03014e29-2486-4bde-9c21-8f7b8dac7b3c" (UID: "03014e29-2486-4bde-9c21-8f7b8dac7b3c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.061531 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "03014e29-2486-4bde-9c21-8f7b8dac7b3c" (UID: "03014e29-2486-4bde-9c21-8f7b8dac7b3c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.120983 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03014e29-2486-4bde-9c21-8f7b8dac7b3c" (UID: "03014e29-2486-4bde-9c21-8f7b8dac7b3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.126634 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q9pp\" (UniqueName: \"kubernetes.io/projected/03014e29-2486-4bde-9c21-8f7b8dac7b3c-kube-api-access-8q9pp\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.126676 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.126962 4784 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.130020 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.140293 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-config-data" (OuterVolumeSpecName: "config-data") pod "03014e29-2486-4bde-9c21-8f7b8dac7b3c" (UID: "03014e29-2486-4bde-9c21-8f7b8dac7b3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.232100 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03014e29-2486-4bde-9c21-8f7b8dac7b3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.262958 4784 generic.go:334] "Generic (PLEG): container finished" podID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerID="79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6" exitCode=137 Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.263086 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.263270 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l5m57" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" containerName="registry-server" containerID="cri-o://fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb" gracePeriod=2 Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.269676 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerDied","Data":"79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6"} Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.269741 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03014e29-2486-4bde-9c21-8f7b8dac7b3c","Type":"ContainerDied","Data":"ef09e160fae4d65a6d76954f3ae1ea773c36802d7f284b39cfad95f132f6fc37"} Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.270949 4784 scope.go:117] "RemoveContainer" containerID="79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.341210 4784 scope.go:117] "RemoveContainer" containerID="3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.342030 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.359102 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.373810 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:15 crc kubenswrapper[4784]: E0123 06:44:15.374494 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" 
containerName="sg-core" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.374520 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="sg-core" Jan 23 06:44:15 crc kubenswrapper[4784]: E0123 06:44:15.374537 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="ceilometer-central-agent" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.374544 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="ceilometer-central-agent" Jan 23 06:44:15 crc kubenswrapper[4784]: E0123 06:44:15.374578 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="ceilometer-notification-agent" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.374585 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="ceilometer-notification-agent" Jan 23 06:44:15 crc kubenswrapper[4784]: E0123 06:44:15.374612 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="proxy-httpd" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.374619 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="proxy-httpd" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.374851 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="ceilometer-notification-agent" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.374869 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="sg-core" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.374882 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="ceilometer-central-agent" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.374894 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" containerName="proxy-httpd" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.377330 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.382116 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.383339 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.394938 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.400412 4784 scope.go:117] "RemoveContainer" containerID="e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.444339 4784 scope.go:117] "RemoveContainer" containerID="def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.474701 4784 scope.go:117] "RemoveContainer" containerID="79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6" Jan 23 06:44:15 crc kubenswrapper[4784]: E0123 06:44:15.478964 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6\": container with ID starting with 79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6 not found: ID does not exist" containerID="79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.479046 
4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6"} err="failed to get container status \"79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6\": rpc error: code = NotFound desc = could not find container \"79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6\": container with ID starting with 79a05d75f8ec4e4611a09229621e1d9ab330bf7ed02d9596e05e3b36255398b6 not found: ID does not exist" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.479094 4784 scope.go:117] "RemoveContainer" containerID="3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479" Jan 23 06:44:15 crc kubenswrapper[4784]: E0123 06:44:15.482860 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479\": container with ID starting with 3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479 not found: ID does not exist" containerID="3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.482900 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479"} err="failed to get container status \"3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479\": rpc error: code = NotFound desc = could not find container \"3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479\": container with ID starting with 3c355d0c43121e621d33a3f0e53cb1cc1cd269e82c7cc706afacbb54bb42d479 not found: ID does not exist" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.482920 4784 scope.go:117] "RemoveContainer" containerID="e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218" Jan 23 06:44:15 crc kubenswrapper[4784]: E0123 
06:44:15.483295 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218\": container with ID starting with e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218 not found: ID does not exist" containerID="e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.483351 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218"} err="failed to get container status \"e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218\": rpc error: code = NotFound desc = could not find container \"e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218\": container with ID starting with e0c1dd93d9182f839fac6c73ac59ef5a9a8477a6f6a13f241979f49c03d94218 not found: ID does not exist" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.483385 4784 scope.go:117] "RemoveContainer" containerID="def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44" Jan 23 06:44:15 crc kubenswrapper[4784]: E0123 06:44:15.483805 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44\": container with ID starting with def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44 not found: ID does not exist" containerID="def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.483868 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44"} err="failed to get container status \"def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44\": rpc 
error: code = NotFound desc = could not find container \"def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44\": container with ID starting with def42cf20ce45bb89f7c3beab9aa1f9243be5159886924eb9852a4fbd1703a44 not found: ID does not exist" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.541576 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-log-httpd\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.541671 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-scripts\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.541718 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-run-httpd\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.541816 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.541844 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-config-data\") pod 
\"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.541877 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wwr\" (UniqueName: \"kubernetes.io/projected/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-kube-api-access-f7wwr\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.541911 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.643860 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-log-httpd\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.643944 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-scripts\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.643980 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-run-httpd\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.644059 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.644082 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-config-data\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.644109 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wwr\" (UniqueName: \"kubernetes.io/projected/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-kube-api-access-f7wwr\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.644147 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.644660 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-log-httpd\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.646009 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-run-httpd\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " 
pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.654485 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.655022 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.655232 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-scripts\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.655420 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-config-data\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.666906 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wwr\" (UniqueName: \"kubernetes.io/projected/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-kube-api-access-f7wwr\") pod \"ceilometer-0\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.723299 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.830437 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.951038 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9gcq\" (UniqueName: \"kubernetes.io/projected/c94911b4-c61e-483a-bbf0-6e529adca249-kube-api-access-q9gcq\") pod \"c94911b4-c61e-483a-bbf0-6e529adca249\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.951273 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-utilities\") pod \"c94911b4-c61e-483a-bbf0-6e529adca249\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.951606 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-catalog-content\") pod \"c94911b4-c61e-483a-bbf0-6e529adca249\" (UID: \"c94911b4-c61e-483a-bbf0-6e529adca249\") " Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.952561 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-utilities" (OuterVolumeSpecName: "utilities") pod "c94911b4-c61e-483a-bbf0-6e529adca249" (UID: "c94911b4-c61e-483a-bbf0-6e529adca249"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.953860 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:15 crc kubenswrapper[4784]: I0123 06:44:15.959589 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94911b4-c61e-483a-bbf0-6e529adca249-kube-api-access-q9gcq" (OuterVolumeSpecName: "kube-api-access-q9gcq") pod "c94911b4-c61e-483a-bbf0-6e529adca249" (UID: "c94911b4-c61e-483a-bbf0-6e529adca249"). InnerVolumeSpecName "kube-api-access-q9gcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.056740 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9gcq\" (UniqueName: \"kubernetes.io/projected/c94911b4-c61e-483a-bbf0-6e529adca249-kube-api-access-q9gcq\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.119845 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c94911b4-c61e-483a-bbf0-6e529adca249" (UID: "c94911b4-c61e-483a-bbf0-6e529adca249"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.159521 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c94911b4-c61e-483a-bbf0-6e529adca249-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.267598 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:16 crc kubenswrapper[4784]: W0123 06:44:16.276896 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38f5f43_48d6_4f0d_9fe2_d4e96f6d110b.slice/crio-edd5e423f04b1429a09b1285d76812433d7c8304772bfb3b68f3085ed3ed9326 WatchSource:0}: Error finding container edd5e423f04b1429a09b1285d76812433d7c8304772bfb3b68f3085ed3ed9326: Status 404 returned error can't find the container with id edd5e423f04b1429a09b1285d76812433d7c8304772bfb3b68f3085ed3ed9326 Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.278232 4784 generic.go:334] "Generic (PLEG): container finished" podID="c94911b4-c61e-483a-bbf0-6e529adca249" containerID="fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb" exitCode=0 Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.278324 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5m57" event={"ID":"c94911b4-c61e-483a-bbf0-6e529adca249","Type":"ContainerDied","Data":"fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb"} Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.278429 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5m57" event={"ID":"c94911b4-c61e-483a-bbf0-6e529adca249","Type":"ContainerDied","Data":"e432e8e48994464ec65b7cdafc57ae1f70df686ee9ccad5f0eacfae740bd7771"} Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.278305 4784 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l5m57" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.278458 4784 scope.go:117] "RemoveContainer" containerID="fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.311079 4784 scope.go:117] "RemoveContainer" containerID="b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.340070 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l5m57"] Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.350864 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l5m57"] Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.361301 4784 scope.go:117] "RemoveContainer" containerID="226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.392296 4784 scope.go:117] "RemoveContainer" containerID="fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb" Jan 23 06:44:16 crc kubenswrapper[4784]: E0123 06:44:16.393122 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb\": container with ID starting with fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb not found: ID does not exist" containerID="fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.393202 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb"} err="failed to get container status \"fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb\": rpc error: code = NotFound desc = 
could not find container \"fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb\": container with ID starting with fbc8a577da711bfdce013a1ed5d147a17aabb98d714865bc40d4f5d944e0f7eb not found: ID does not exist" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.393242 4784 scope.go:117] "RemoveContainer" containerID="b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a" Jan 23 06:44:16 crc kubenswrapper[4784]: E0123 06:44:16.393609 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a\": container with ID starting with b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a not found: ID does not exist" containerID="b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.393640 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a"} err="failed to get container status \"b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a\": rpc error: code = NotFound desc = could not find container \"b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a\": container with ID starting with b9bf66d9aa35201ca0469e446d93a1b9d4c5d0592995d4b7109a0a941d42da4a not found: ID does not exist" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.393662 4784 scope.go:117] "RemoveContainer" containerID="226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa" Jan 23 06:44:16 crc kubenswrapper[4784]: E0123 06:44:16.393943 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa\": container with ID starting with 226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa not 
found: ID does not exist" containerID="226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa" Jan 23 06:44:16 crc kubenswrapper[4784]: I0123 06:44:16.393973 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa"} err="failed to get container status \"226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa\": rpc error: code = NotFound desc = could not find container \"226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa\": container with ID starting with 226e9ecfead5d0947bf08ed634790a6f922ba81f4ddfd9209cebe96e9cef3caa not found: ID does not exist" Jan 23 06:44:17 crc kubenswrapper[4784]: I0123 06:44:17.271258 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03014e29-2486-4bde-9c21-8f7b8dac7b3c" path="/var/lib/kubelet/pods/03014e29-2486-4bde-9c21-8f7b8dac7b3c/volumes" Jan 23 06:44:17 crc kubenswrapper[4784]: I0123 06:44:17.272889 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" path="/var/lib/kubelet/pods/c94911b4-c61e-483a-bbf0-6e529adca249/volumes" Jan 23 06:44:17 crc kubenswrapper[4784]: I0123 06:44:17.322393 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerStarted","Data":"5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be"} Jan 23 06:44:17 crc kubenswrapper[4784]: I0123 06:44:17.322464 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerStarted","Data":"edd5e423f04b1429a09b1285d76812433d7c8304772bfb3b68f3085ed3ed9326"} Jan 23 06:44:18 crc kubenswrapper[4784]: I0123 06:44:18.339085 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerStarted","Data":"9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2"} Jan 23 06:44:19 crc kubenswrapper[4784]: I0123 06:44:19.363915 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerStarted","Data":"d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92"} Jan 23 06:44:20 crc kubenswrapper[4784]: I0123 06:44:20.377808 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerStarted","Data":"304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74"} Jan 23 06:44:20 crc kubenswrapper[4784]: I0123 06:44:20.378785 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:44:20 crc kubenswrapper[4784]: I0123 06:44:20.381202 4784 generic.go:334] "Generic (PLEG): container finished" podID="27b495cf-9626-42ed-ad77-e58aadea9973" containerID="679700483f426dcd81199f44c303def09c81e5f9f8be5981ae78876a890280cd" exitCode=0 Jan 23 06:44:20 crc kubenswrapper[4784]: I0123 06:44:20.381241 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" event={"ID":"27b495cf-9626-42ed-ad77-e58aadea9973","Type":"ContainerDied","Data":"679700483f426dcd81199f44c303def09c81e5f9f8be5981ae78876a890280cd"} Jan 23 06:44:20 crc kubenswrapper[4784]: I0123 06:44:20.404313 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7299315499999999 podStartE2EDuration="5.404285533s" podCreationTimestamp="2026-01-23 06:44:15 +0000 UTC" firstStartedPulling="2026-01-23 06:44:16.279356367 +0000 UTC m=+1459.511864341" lastFinishedPulling="2026-01-23 06:44:19.95371035 +0000 UTC m=+1463.186218324" observedRunningTime="2026-01-23 06:44:20.403570214 +0000 UTC 
m=+1463.636078198" watchObservedRunningTime="2026-01-23 06:44:20.404285533 +0000 UTC m=+1463.636793507" Jan 23 06:44:21 crc kubenswrapper[4784]: E0123 06:44:21.576152 4784 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Jan 23 06:44:21 crc kubenswrapper[4784]: I0123 06:44:21.624192 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:44:21 crc kubenswrapper[4784]: I0123 06:44:21.686251 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 06:44:21 crc kubenswrapper[4784]: I0123 06:44:21.824011 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:44:21 crc kubenswrapper[4784]: I0123 06:44:21.998639 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt74k\" (UniqueName: \"kubernetes.io/projected/27b495cf-9626-42ed-ad77-e58aadea9973-kube-api-access-qt74k\") pod \"27b495cf-9626-42ed-ad77-e58aadea9973\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " Jan 23 06:44:21 crc kubenswrapper[4784]: I0123 06:44:21.999257 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-scripts\") pod \"27b495cf-9626-42ed-ad77-e58aadea9973\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " Jan 23 06:44:21 crc kubenswrapper[4784]: I0123 06:44:21.999562 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-config-data\") pod \"27b495cf-9626-42ed-ad77-e58aadea9973\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " Jan 23 06:44:21 crc kubenswrapper[4784]: I0123 06:44:21.999807 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-combined-ca-bundle\") pod \"27b495cf-9626-42ed-ad77-e58aadea9973\" (UID: \"27b495cf-9626-42ed-ad77-e58aadea9973\") " Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.021735 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-scripts" (OuterVolumeSpecName: "scripts") pod "27b495cf-9626-42ed-ad77-e58aadea9973" (UID: "27b495cf-9626-42ed-ad77-e58aadea9973"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.025722 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b495cf-9626-42ed-ad77-e58aadea9973-kube-api-access-qt74k" (OuterVolumeSpecName: "kube-api-access-qt74k") pod "27b495cf-9626-42ed-ad77-e58aadea9973" (UID: "27b495cf-9626-42ed-ad77-e58aadea9973"). InnerVolumeSpecName "kube-api-access-qt74k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.035434 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-config-data" (OuterVolumeSpecName: "config-data") pod "27b495cf-9626-42ed-ad77-e58aadea9973" (UID: "27b495cf-9626-42ed-ad77-e58aadea9973"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.037710 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27b495cf-9626-42ed-ad77-e58aadea9973" (UID: "27b495cf-9626-42ed-ad77-e58aadea9973"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.103846 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt74k\" (UniqueName: \"kubernetes.io/projected/27b495cf-9626-42ed-ad77-e58aadea9973-kube-api-access-qt74k\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.103907 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.103923 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.103938 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b495cf-9626-42ed-ad77-e58aadea9973-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.420234 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.421134 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xv9ck" event={"ID":"27b495cf-9626-42ed-ad77-e58aadea9973","Type":"ContainerDied","Data":"257c4d4469a20850d079d0c436c1b2812462f533f8c8b9f5754ac87018685ef3"} Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.421209 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="257c4d4469a20850d079d0c436c1b2812462f533f8c8b9f5754ac87018685ef3" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.421448 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.471009 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.562033 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:44:22 crc kubenswrapper[4784]: E0123 06:44:22.562565 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" containerName="registry-server" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.562586 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" containerName="registry-server" Jan 23 06:44:22 crc kubenswrapper[4784]: E0123 06:44:22.562615 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b495cf-9626-42ed-ad77-e58aadea9973" containerName="nova-cell0-conductor-db-sync" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.562623 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b495cf-9626-42ed-ad77-e58aadea9973" containerName="nova-cell0-conductor-db-sync" Jan 23 06:44:22 crc kubenswrapper[4784]: E0123 
06:44:22.562647 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" containerName="extract-utilities" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.562654 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" containerName="extract-utilities" Jan 23 06:44:22 crc kubenswrapper[4784]: E0123 06:44:22.562679 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" containerName="extract-content" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.562685 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" containerName="extract-content" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.562902 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b495cf-9626-42ed-ad77-e58aadea9973" containerName="nova-cell0-conductor-db-sync" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.562917 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94911b4-c61e-483a-bbf0-6e529adca249" containerName="registry-server" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.563721 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.570075 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8j4jf" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.570397 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.595826 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.624889 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.625194 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvm4\" (UniqueName: \"kubernetes.io/projected/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-kube-api-access-svvm4\") pod \"nova-cell0-conductor-0\" (UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.625346 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.727280 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.727419 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvm4\" (UniqueName: \"kubernetes.io/projected/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-kube-api-access-svvm4\") pod \"nova-cell0-conductor-0\" (UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.727468 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.734405 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.737621 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.770633 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvm4\" (UniqueName: \"kubernetes.io/projected/49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba-kube-api-access-svvm4\") pod \"nova-cell0-conductor-0\" 
(UID: \"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:22 crc kubenswrapper[4784]: I0123 06:44:22.885915 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:23 crc kubenswrapper[4784]: I0123 06:44:23.454784 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:44:23 crc kubenswrapper[4784]: W0123 06:44:23.456429 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49ca1bfc_7758_4d5d_85f5_1ffd4ee430ba.slice/crio-488ef370717a770fdaca2d0100830b3194e80fd54771ca2b7f1c9bd416d4f0ec WatchSource:0}: Error finding container 488ef370717a770fdaca2d0100830b3194e80fd54771ca2b7f1c9bd416d4f0ec: Status 404 returned error can't find the container with id 488ef370717a770fdaca2d0100830b3194e80fd54771ca2b7f1c9bd416d4f0ec Jan 23 06:44:24 crc kubenswrapper[4784]: I0123 06:44:24.401397 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:24 crc kubenswrapper[4784]: I0123 06:44:24.402538 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="ceilometer-central-agent" containerID="cri-o://5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be" gracePeriod=30 Jan 23 06:44:24 crc kubenswrapper[4784]: I0123 06:44:24.402598 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="sg-core" containerID="cri-o://d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92" gracePeriod=30 Jan 23 06:44:24 crc kubenswrapper[4784]: I0123 06:44:24.402679 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="proxy-httpd" containerID="cri-o://304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74" gracePeriod=30 Jan 23 06:44:24 crc kubenswrapper[4784]: I0123 06:44:24.402714 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="ceilometer-notification-agent" containerID="cri-o://9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2" gracePeriod=30 Jan 23 06:44:24 crc kubenswrapper[4784]: I0123 06:44:24.446430 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba","Type":"ContainerStarted","Data":"f24abd6d692fdfaba1606aba1ca8d29ba4b8354821b76b30523b6e98525b92c1"} Jan 23 06:44:24 crc kubenswrapper[4784]: I0123 06:44:24.446501 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba","Type":"ContainerStarted","Data":"488ef370717a770fdaca2d0100830b3194e80fd54771ca2b7f1c9bd416d4f0ec"} Jan 23 06:44:24 crc kubenswrapper[4784]: I0123 06:44:24.474203 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.474178763 podStartE2EDuration="2.474178763s" podCreationTimestamp="2026-01-23 06:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:24.471990639 +0000 UTC m=+1467.704498613" watchObservedRunningTime="2026-01-23 06:44:24.474178763 +0000 UTC m=+1467.706686737" Jan 23 06:44:25 crc kubenswrapper[4784]: I0123 06:44:25.460106 4784 generic.go:334] "Generic (PLEG): container finished" podID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerID="304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74" exitCode=0 Jan 23 06:44:25 crc 
kubenswrapper[4784]: I0123 06:44:25.461893 4784 generic.go:334] "Generic (PLEG): container finished" podID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerID="d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92" exitCode=2 Jan 23 06:44:25 crc kubenswrapper[4784]: I0123 06:44:25.461955 4784 generic.go:334] "Generic (PLEG): container finished" podID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerID="9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2" exitCode=0 Jan 23 06:44:25 crc kubenswrapper[4784]: I0123 06:44:25.460201 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerDied","Data":"304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74"} Jan 23 06:44:25 crc kubenswrapper[4784]: I0123 06:44:25.462154 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerDied","Data":"d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92"} Jan 23 06:44:25 crc kubenswrapper[4784]: I0123 06:44:25.462203 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerDied","Data":"9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2"} Jan 23 06:44:25 crc kubenswrapper[4784]: I0123 06:44:25.462242 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 06:44:28 crc kubenswrapper[4784]: I0123 06:44:28.950904 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.124385 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-log-httpd\") pod \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.124481 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-config-data\") pod \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.124512 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7wwr\" (UniqueName: \"kubernetes.io/projected/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-kube-api-access-f7wwr\") pod \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.124821 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-combined-ca-bundle\") pod \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.124880 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-sg-core-conf-yaml\") pod \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.124912 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-scripts\") pod \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.125009 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-run-httpd\") pod \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\" (UID: \"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b\") " Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.125492 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" (UID: "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.125580 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" (UID: "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.126487 4784 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.126511 4784 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.146088 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-scripts" (OuterVolumeSpecName: "scripts") pod "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" (UID: "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.184403 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-kube-api-access-f7wwr" (OuterVolumeSpecName: "kube-api-access-f7wwr") pod "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" (UID: "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b"). InnerVolumeSpecName "kube-api-access-f7wwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.208082 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" (UID: "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.229037 4784 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.229087 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.229098 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7wwr\" (UniqueName: \"kubernetes.io/projected/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-kube-api-access-f7wwr\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.276666 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" (UID: "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.298905 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-config-data" (OuterVolumeSpecName: "config-data") pod "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" (UID: "c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.331233 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.331298 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.507441 4784 generic.go:334] "Generic (PLEG): container finished" podID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerID="5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be" exitCode=0 Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.507507 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerDied","Data":"5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be"} Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.507561 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b","Type":"ContainerDied","Data":"edd5e423f04b1429a09b1285d76812433d7c8304772bfb3b68f3085ed3ed9326"} Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.507586 4784 scope.go:117] "RemoveContainer" containerID="304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.507670 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.598236 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.602805 4784 scope.go:117] "RemoveContainer" containerID="d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.626601 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.653144 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:29 crc kubenswrapper[4784]: E0123 06:44:29.653991 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="ceilometer-central-agent" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.654040 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="ceilometer-central-agent" Jan 23 06:44:29 crc kubenswrapper[4784]: E0123 06:44:29.654066 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="sg-core" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.654076 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="sg-core" Jan 23 06:44:29 crc kubenswrapper[4784]: E0123 06:44:29.654114 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="proxy-httpd" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.654122 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="proxy-httpd" Jan 23 06:44:29 crc kubenswrapper[4784]: E0123 06:44:29.654151 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="ceilometer-notification-agent" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.654159 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="ceilometer-notification-agent" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.654478 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="sg-core" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.654498 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="ceilometer-notification-agent" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.654512 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="proxy-httpd" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.654554 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" containerName="ceilometer-central-agent" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.660643 4784 scope.go:117] "RemoveContainer" containerID="9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.677135 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.677475 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.681928 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.681928 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.690005 4784 scope.go:117] "RemoveContainer" containerID="5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.722302 4784 scope.go:117] "RemoveContainer" containerID="304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74" Jan 23 06:44:29 crc kubenswrapper[4784]: E0123 06:44:29.722974 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74\": container with ID starting with 304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74 not found: ID does not exist" containerID="304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.723018 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74"} err="failed to get container status \"304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74\": rpc error: code = NotFound desc = could not find container \"304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74\": container with ID starting with 304ec502ceb28278968c20422060b052ff3c6fb4a565066e6f898b6f65205f74 not found: ID does not exist" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.723049 4784 scope.go:117] "RemoveContainer" containerID="d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92" Jan 23 06:44:29 crc 
kubenswrapper[4784]: E0123 06:44:29.723627 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92\": container with ID starting with d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92 not found: ID does not exist" containerID="d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.723817 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92"} err="failed to get container status \"d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92\": rpc error: code = NotFound desc = could not find container \"d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92\": container with ID starting with d809cb460d0d5db2aff6fc2c10becca1b9c15fd725d091cc7c98f5fa5a233a92 not found: ID does not exist" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.724016 4784 scope.go:117] "RemoveContainer" containerID="9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2" Jan 23 06:44:29 crc kubenswrapper[4784]: E0123 06:44:29.724474 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2\": container with ID starting with 9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2 not found: ID does not exist" containerID="9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.724537 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2"} err="failed to get container status 
\"9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2\": rpc error: code = NotFound desc = could not find container \"9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2\": container with ID starting with 9bfa72bee22a9d6c4fd22368df977aac43b31fde8ea0c63432235e11fc7429b2 not found: ID does not exist" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.724668 4784 scope.go:117] "RemoveContainer" containerID="5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be" Jan 23 06:44:29 crc kubenswrapper[4784]: E0123 06:44:29.725203 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be\": container with ID starting with 5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be not found: ID does not exist" containerID="5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.725264 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be"} err="failed to get container status \"5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be\": rpc error: code = NotFound desc = could not find container \"5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be\": container with ID starting with 5a952e7324a81c93e9cefbd1f4a38f383178536f60cf7e3b5df7b837381c81be not found: ID does not exist" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.849803 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-run-httpd\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0" Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.850487 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.850623 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.850777 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-log-httpd\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.850898 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgb6g\" (UniqueName: \"kubernetes.io/projected/2d5166b5-66c2-4450-933f-c66331343200-kube-api-access-dgb6g\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.851024 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-scripts\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.851292 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-config-data\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.953497 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.953878 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.954710 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-log-httpd\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.954876 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgb6g\" (UniqueName: \"kubernetes.io/projected/2d5166b5-66c2-4450-933f-c66331343200-kube-api-access-dgb6g\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.955010 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-scripts\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.955180 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-config-data\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.955342 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-run-httpd\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.955473 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-log-httpd\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.956676 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-run-httpd\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.962309 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.962366 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-scripts\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.962813 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-config-data\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.967909 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:29 crc kubenswrapper[4784]: I0123 06:44:29.978798 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgb6g\" (UniqueName: \"kubernetes.io/projected/2d5166b5-66c2-4450-933f-c66331343200-kube-api-access-dgb6g\") pod \"ceilometer-0\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " pod="openstack/ceilometer-0"
Jan 23 06:44:30 crc kubenswrapper[4784]: I0123 06:44:30.002843 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 06:44:30 crc kubenswrapper[4784]: I0123 06:44:30.519970 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 06:44:31 crc kubenswrapper[4784]: I0123 06:44:31.268772 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b" path="/var/lib/kubelet/pods/c38f5f43-48d6-4f0d-9fe2-d4e96f6d110b/volumes"
Jan 23 06:44:31 crc kubenswrapper[4784]: I0123 06:44:31.536594 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerStarted","Data":"5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736"}
Jan 23 06:44:31 crc kubenswrapper[4784]: I0123 06:44:31.537215 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerStarted","Data":"322bfa14852b742da9f2f1728359adb9d9be0d1fbdcc98bfdfb5d1b91725e13c"}
Jan 23 06:44:32 crc kubenswrapper[4784]: I0123 06:44:32.553378 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerStarted","Data":"5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47"}
Jan 23 06:44:32 crc kubenswrapper[4784]: I0123 06:44:32.921600 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.473534 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dtt27"]
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.475845 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.479420 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.479671 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.490612 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dtt27"]
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.566262 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerStarted","Data":"2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7"}
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.647711 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-scripts\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.647861 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dcg7\" (UniqueName: \"kubernetes.io/projected/c9104535-ee58-4cc4-ac36-18a922118bed-kube-api-access-2dcg7\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.649133 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-config-data\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.649183 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.724780 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.726446 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.733817 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.756105 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-scripts\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.756187 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dcg7\" (UniqueName: \"kubernetes.io/projected/c9104535-ee58-4cc4-ac36-18a922118bed-kube-api-access-2dcg7\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.756246 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-config-data\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.756269 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.773782 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-scripts\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.775644 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.783542 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.788102 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-config-data\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.805838 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dcg7\" (UniqueName: \"kubernetes.io/projected/c9104535-ee58-4cc4-ac36-18a922118bed-kube-api-access-2dcg7\") pod \"nova-cell0-cell-mapping-dtt27\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.859589 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-config-data\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.859786 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62cn5\" (UniqueName: \"kubernetes.io/projected/8e7c2a29-4715-4c8d-80ac-a2a476a537af-kube-api-access-62cn5\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.859824 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.893265 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.922487 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.943433 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.946838 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.948637 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.953645 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.961896 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-config-data\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.962123 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62cn5\" (UniqueName: \"kubernetes.io/projected/8e7c2a29-4715-4c8d-80ac-a2a476a537af-kube-api-access-62cn5\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.962167 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.975601 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:33 crc kubenswrapper[4784]: I0123 06:44:33.978112 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-config-data\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.020984 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.027528 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62cn5\" (UniqueName: \"kubernetes.io/projected/8e7c2a29-4715-4c8d-80ac-a2a476a537af-kube-api-access-62cn5\") pod \"nova-scheduler-0\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " pod="openstack/nova-scheduler-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.068060 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplkd\" (UniqueName: \"kubernetes.io/projected/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-kube-api-access-rplkd\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.068135 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-logs\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.068193 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.068284 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.068320 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x4l2\" (UniqueName: \"kubernetes.io/projected/0ed163b1-1994-463b-a9c7-90ce7e097713-kube-api-access-5x4l2\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.068370 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-config-data\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.068394 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.091106 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.115436 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dtt27"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.137553 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.139783 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.153956 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.174181 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.174242 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x4l2\" (UniqueName: \"kubernetes.io/projected/0ed163b1-1994-463b-a9c7-90ce7e097713-kube-api-access-5x4l2\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.174300 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-config-data\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.174325 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.174374 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rplkd\" (UniqueName: \"kubernetes.io/projected/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-kube-api-access-rplkd\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.174409 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-logs\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.174445 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.179837 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-logs\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.201434 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-config-data\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.201547 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.201605 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.201786 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.218428 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rplkd\" (UniqueName: \"kubernetes.io/projected/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-kube-api-access-rplkd\") pod \"nova-api-0\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.229565 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x4l2\" (UniqueName: \"kubernetes.io/projected/0ed163b1-1994-463b-a9c7-90ce7e097713-kube-api-access-5x4l2\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.230097 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.237281 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.278980 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-config-data\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.281034 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7gt\" (UniqueName: \"kubernetes.io/projected/e1f57947-a8f3-4250-8d6b-9197d5a293b2-kube-api-access-qv7gt\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.281979 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.282286 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f57947-a8f3-4250-8d6b-9197d5a293b2-logs\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.299431 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-khwgk"]
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.302973 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.331996 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-khwgk"]
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.386075 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7gt\" (UniqueName: \"kubernetes.io/projected/e1f57947-a8f3-4250-8d6b-9197d5a293b2-kube-api-access-qv7gt\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.386648 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.386709 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.386776 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.386800 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f57947-a8f3-4250-8d6b-9197d5a293b2-logs\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.386853 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-config\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.386893 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7nx\" (UniqueName: \"kubernetes.io/projected/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-kube-api-access-8q7nx\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.386969 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.387004 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.387313 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-config-data\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.389708 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f57947-a8f3-4250-8d6b-9197d5a293b2-logs\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.400447 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.403314 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-config-data\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.419519 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7gt\" (UniqueName: \"kubernetes.io/projected/e1f57947-a8f3-4250-8d6b-9197d5a293b2-kube-api-access-qv7gt\") pod \"nova-metadata-0\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " pod="openstack/nova-metadata-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.445804 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.489431 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-config\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.489514 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q7nx\" (UniqueName: \"kubernetes.io/projected/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-kube-api-access-8q7nx\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.489580 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.489612 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.489723 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.489767 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.491069 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.491087 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.491421 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-config\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.491966 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk"
Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.492623 4784 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk" Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.515456 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.524772 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q7nx\" (UniqueName: \"kubernetes.io/projected/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-kube-api-access-8q7nx\") pod \"dnsmasq-dns-757b4f8459-khwgk\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " pod="openstack/dnsmasq-dns-757b4f8459-khwgk" Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.536312 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.642317 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" Jan 23 06:44:34 crc kubenswrapper[4784]: I0123 06:44:34.905341 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dtt27"] Jan 23 06:44:34 crc kubenswrapper[4784]: W0123 06:44:34.922251 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9104535_ee58_4cc4_ac36_18a922118bed.slice/crio-3ff363a39d0ed9851fa80cdb0d4ebf33366b04d756f58d926696b8666b9491fe WatchSource:0}: Error finding container 3ff363a39d0ed9851fa80cdb0d4ebf33366b04d756f58d926696b8666b9491fe: Status 404 returned error can't find the container with id 3ff363a39d0ed9851fa80cdb0d4ebf33366b04d756f58d926696b8666b9491fe Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.090204 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.444292 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.646933 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a","Type":"ContainerStarted","Data":"f1e6633007e4cabf4e81a9f46516369af4fa1579251d732a3e9addceffb0d3ad"} Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.658581 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8e7c2a29-4715-4c8d-80ac-a2a476a537af","Type":"ContainerStarted","Data":"5f058bfe89262290336e72e702c061d7733741e9e484ce267c90acfa9a13e66e"} Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.683512 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.721728 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dtt27" 
event={"ID":"c9104535-ee58-4cc4-ac36-18a922118bed","Type":"ContainerStarted","Data":"3ff363a39d0ed9851fa80cdb0d4ebf33366b04d756f58d926696b8666b9491fe"} Jan 23 06:44:35 crc kubenswrapper[4784]: W0123 06:44:35.729933 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed163b1_1994_463b_a9c7_90ce7e097713.slice/crio-4815907179e6d3e3b7ae99f123e078a5dc3585b23c37626f30dbce10202c3f94 WatchSource:0}: Error finding container 4815907179e6d3e3b7ae99f123e078a5dc3585b23c37626f30dbce10202c3f94: Status 404 returned error can't find the container with id 4815907179e6d3e3b7ae99f123e078a5dc3585b23c37626f30dbce10202c3f94 Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.780216 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.892129 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t4pnl"] Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.895306 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.898208 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.903737 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.905191 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t4pnl"] Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.989685 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-khwgk"] Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.994577 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.994710 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-config-data\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.994742 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-scripts\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" 
Jan 23 06:44:35 crc kubenswrapper[4784]: I0123 06:44:35.994799 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmnp\" (UniqueName: \"kubernetes.io/projected/95c9045d-accf-4fe6-b22a-1b9cee39a56c-kube-api-access-gdmnp\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.104216 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdmnp\" (UniqueName: \"kubernetes.io/projected/95c9045d-accf-4fe6-b22a-1b9cee39a56c-kube-api-access-gdmnp\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.104843 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.105187 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-config-data\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.105262 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-scripts\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " 
pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.114544 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.116033 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-scripts\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.151565 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-config-data\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.154085 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdmnp\" (UniqueName: \"kubernetes.io/projected/95c9045d-accf-4fe6-b22a-1b9cee39a56c-kube-api-access-gdmnp\") pod \"nova-cell1-conductor-db-sync-t4pnl\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.168177 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.743074 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ed163b1-1994-463b-a9c7-90ce7e097713","Type":"ContainerStarted","Data":"4815907179e6d3e3b7ae99f123e078a5dc3585b23c37626f30dbce10202c3f94"} Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.749973 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dtt27" event={"ID":"c9104535-ee58-4cc4-ac36-18a922118bed","Type":"ContainerStarted","Data":"b621f79d732e8d839f37db0483f5411a10f308b98c40d2b8ee777e82fd03805f"} Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.751423 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1f57947-a8f3-4250-8d6b-9197d5a293b2","Type":"ContainerStarted","Data":"671d74b6bf8cf93e9e16c2c59f316020b7e08dc12c8a361ea215645eb177aebc"} Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.753429 4784 generic.go:334] "Generic (PLEG): container finished" podID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" containerID="a96176ead09888f7a36bbc745013577a5b1f91eb5881d5fb1421903eafd90a4c" exitCode=0 Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.753511 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" event={"ID":"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f","Type":"ContainerDied","Data":"a96176ead09888f7a36bbc745013577a5b1f91eb5881d5fb1421903eafd90a4c"} Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.753544 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" event={"ID":"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f","Type":"ContainerStarted","Data":"05a01f5847673092a7d281c01215f3e703682de54768e9321971097c527870d2"} Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.766190 4784 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerStarted","Data":"92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c"} Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.766659 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.789680 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dtt27" podStartSLOduration=3.78962809 podStartE2EDuration="3.78962809s" podCreationTimestamp="2026-01-23 06:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:36.78354996 +0000 UTC m=+1480.016057934" watchObservedRunningTime="2026-01-23 06:44:36.78962809 +0000 UTC m=+1480.022136064" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.828823 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.55401087 podStartE2EDuration="7.828783824s" podCreationTimestamp="2026-01-23 06:44:29 +0000 UTC" firstStartedPulling="2026-01-23 06:44:30.525734978 +0000 UTC m=+1473.758242942" lastFinishedPulling="2026-01-23 06:44:34.800507922 +0000 UTC m=+1478.033015896" observedRunningTime="2026-01-23 06:44:36.816139703 +0000 UTC m=+1480.048647677" watchObservedRunningTime="2026-01-23 06:44:36.828783824 +0000 UTC m=+1480.061291798" Jan 23 06:44:36 crc kubenswrapper[4784]: I0123 06:44:36.909588 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t4pnl"] Jan 23 06:44:37 crc kubenswrapper[4784]: I0123 06:44:37.808340 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" 
event={"ID":"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f","Type":"ContainerStarted","Data":"0638cf18446280008cc5bb8414a9b26ad74c1172183eaeb1bf6a52b9c0e85e65"} Jan 23 06:44:37 crc kubenswrapper[4784]: I0123 06:44:37.809248 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" Jan 23 06:44:37 crc kubenswrapper[4784]: I0123 06:44:37.815347 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" event={"ID":"95c9045d-accf-4fe6-b22a-1b9cee39a56c","Type":"ContainerStarted","Data":"9b526ad1247934d3f7b8cb407807dd71a97aa2df829c42d4a18f7c173c29cf56"} Jan 23 06:44:37 crc kubenswrapper[4784]: I0123 06:44:37.815413 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" event={"ID":"95c9045d-accf-4fe6-b22a-1b9cee39a56c","Type":"ContainerStarted","Data":"f38b679fdcbf4932739a1c915a73b0a74bb06dccc57e5af7a10bdf9e62822193"} Jan 23 06:44:37 crc kubenswrapper[4784]: I0123 06:44:37.836001 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" podStartSLOduration=3.835973918 podStartE2EDuration="3.835973918s" podCreationTimestamp="2026-01-23 06:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:37.831204511 +0000 UTC m=+1481.063712485" watchObservedRunningTime="2026-01-23 06:44:37.835973918 +0000 UTC m=+1481.068481892" Jan 23 06:44:37 crc kubenswrapper[4784]: I0123 06:44:37.863221 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" podStartSLOduration=2.863190738 podStartE2EDuration="2.863190738s" podCreationTimestamp="2026-01-23 06:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
06:44:37.851983742 +0000 UTC m=+1481.084491716" watchObservedRunningTime="2026-01-23 06:44:37.863190738 +0000 UTC m=+1481.095698712" Jan 23 06:44:38 crc kubenswrapper[4784]: I0123 06:44:38.365179 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:44:38 crc kubenswrapper[4784]: I0123 06:44:38.671769 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.883730 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ed163b1-1994-463b-a9c7-90ce7e097713","Type":"ContainerStarted","Data":"57bc54fd5f50df24b6bcf0537135868ac1dfb7f465f709dada495e033b7f278f"} Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.883943 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0ed163b1-1994-463b-a9c7-90ce7e097713" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://57bc54fd5f50df24b6bcf0537135868ac1dfb7f465f709dada495e033b7f278f" gracePeriod=30 Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.888488 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1f57947-a8f3-4250-8d6b-9197d5a293b2","Type":"ContainerStarted","Data":"5e926b8bd80471188814e5a1400c0c8285188f77b62eddf099065fdb13eac7c3"} Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.888581 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1f57947-a8f3-4250-8d6b-9197d5a293b2","Type":"ContainerStarted","Data":"43f888398afecd79d9cf153ba1f77691ec6f544e6978783ea54a915703bfa839"} Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.888627 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerName="nova-metadata-log" 
containerID="cri-o://43f888398afecd79d9cf153ba1f77691ec6f544e6978783ea54a915703bfa839" gracePeriod=30 Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.888734 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerName="nova-metadata-metadata" containerID="cri-o://5e926b8bd80471188814e5a1400c0c8285188f77b62eddf099065fdb13eac7c3" gracePeriod=30 Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.896187 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a","Type":"ContainerStarted","Data":"6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1"} Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.900283 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8e7c2a29-4715-4c8d-80ac-a2a476a537af","Type":"ContainerStarted","Data":"113d5bd26b46642d3068487bf0ed3e41fa897b5a0c72eb80fe257089c060e66c"} Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.913652 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.696924526 podStartE2EDuration="8.91362732s" podCreationTimestamp="2026-01-23 06:44:33 +0000 UTC" firstStartedPulling="2026-01-23 06:44:35.743235039 +0000 UTC m=+1478.975743013" lastFinishedPulling="2026-01-23 06:44:40.959937833 +0000 UTC m=+1484.192445807" observedRunningTime="2026-01-23 06:44:41.90347057 +0000 UTC m=+1485.135978544" watchObservedRunningTime="2026-01-23 06:44:41.91362732 +0000 UTC m=+1485.146135294" Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.967329 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.640221252 podStartE2EDuration="8.967308002s" podCreationTimestamp="2026-01-23 06:44:33 +0000 UTC" firstStartedPulling="2026-01-23 
06:44:35.707370187 +0000 UTC m=+1478.939878161" lastFinishedPulling="2026-01-23 06:44:41.034456937 +0000 UTC m=+1484.266964911" observedRunningTime="2026-01-23 06:44:41.955951192 +0000 UTC m=+1485.188459166" watchObservedRunningTime="2026-01-23 06:44:41.967308002 +0000 UTC m=+1485.199815976" Jan 23 06:44:41 crc kubenswrapper[4784]: I0123 06:44:41.999187 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.532078269 podStartE2EDuration="8.999145085s" podCreationTimestamp="2026-01-23 06:44:33 +0000 UTC" firstStartedPulling="2026-01-23 06:44:35.498852034 +0000 UTC m=+1478.731360008" lastFinishedPulling="2026-01-23 06:44:40.96591885 +0000 UTC m=+1484.198426824" observedRunningTime="2026-01-23 06:44:41.988544545 +0000 UTC m=+1485.221052529" watchObservedRunningTime="2026-01-23 06:44:41.999145085 +0000 UTC m=+1485.231653059" Jan 23 06:44:42 crc kubenswrapper[4784]: I0123 06:44:42.024722 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.276235971 podStartE2EDuration="9.024697785s" podCreationTimestamp="2026-01-23 06:44:33 +0000 UTC" firstStartedPulling="2026-01-23 06:44:35.154020704 +0000 UTC m=+1478.386528678" lastFinishedPulling="2026-01-23 06:44:40.902482518 +0000 UTC m=+1484.134990492" observedRunningTime="2026-01-23 06:44:42.023211548 +0000 UTC m=+1485.255719542" watchObservedRunningTime="2026-01-23 06:44:42.024697785 +0000 UTC m=+1485.257205759" Jan 23 06:44:42 crc kubenswrapper[4784]: I0123 06:44:42.920169 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a","Type":"ContainerStarted","Data":"1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca"} Jan 23 06:44:42 crc kubenswrapper[4784]: I0123 06:44:42.926475 4784 generic.go:334] "Generic (PLEG): container finished" podID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" 
containerID="43f888398afecd79d9cf153ba1f77691ec6f544e6978783ea54a915703bfa839" exitCode=143 Jan 23 06:44:42 crc kubenswrapper[4784]: I0123 06:44:42.928093 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1f57947-a8f3-4250-8d6b-9197d5a293b2","Type":"ContainerDied","Data":"43f888398afecd79d9cf153ba1f77691ec6f544e6978783ea54a915703bfa839"} Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.230442 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.230508 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.265741 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.447511 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.447606 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.516340 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.537134 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.537214 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.645918 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.756332 4784 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vjn7m"] Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.757091 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" podUID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerName="dnsmasq-dns" containerID="cri-o://e10ba1bb494c3c49f0632a1f5c80940d6c7cee912fac556a037cd7f4cf53d8f3" gracePeriod=10 Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.972111 4784 generic.go:334] "Generic (PLEG): container finished" podID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerID="e10ba1bb494c3c49f0632a1f5c80940d6c7cee912fac556a037cd7f4cf53d8f3" exitCode=0 Jan 23 06:44:44 crc kubenswrapper[4784]: I0123 06:44:44.974212 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" event={"ID":"750febeb-10c6-4c60-b3a8-de1e417213f4","Type":"ContainerDied","Data":"e10ba1bb494c3c49f0632a1f5c80940d6c7cee912fac556a037cd7f4cf53d8f3"} Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.026480 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" podUID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: connect: connection refused" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.336070 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.531170 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.531523 4784 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.584912 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.706648 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-svc\") pod \"750febeb-10c6-4c60-b3a8-de1e417213f4\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.706792 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-nb\") pod \"750febeb-10c6-4c60-b3a8-de1e417213f4\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.706863 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-config\") pod \"750febeb-10c6-4c60-b3a8-de1e417213f4\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.706994 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-sb\") pod \"750febeb-10c6-4c60-b3a8-de1e417213f4\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.707083 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq6mw\" 
(UniqueName: \"kubernetes.io/projected/750febeb-10c6-4c60-b3a8-de1e417213f4-kube-api-access-cq6mw\") pod \"750febeb-10c6-4c60-b3a8-de1e417213f4\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.707115 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-swift-storage-0\") pod \"750febeb-10c6-4c60-b3a8-de1e417213f4\" (UID: \"750febeb-10c6-4c60-b3a8-de1e417213f4\") " Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.718650 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750febeb-10c6-4c60-b3a8-de1e417213f4-kube-api-access-cq6mw" (OuterVolumeSpecName: "kube-api-access-cq6mw") pod "750febeb-10c6-4c60-b3a8-de1e417213f4" (UID: "750febeb-10c6-4c60-b3a8-de1e417213f4"). InnerVolumeSpecName "kube-api-access-cq6mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.789122 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "750febeb-10c6-4c60-b3a8-de1e417213f4" (UID: "750febeb-10c6-4c60-b3a8-de1e417213f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.805311 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "750febeb-10c6-4c60-b3a8-de1e417213f4" (UID: "750febeb-10c6-4c60-b3a8-de1e417213f4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.811361 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.811400 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq6mw\" (UniqueName: \"kubernetes.io/projected/750febeb-10c6-4c60-b3a8-de1e417213f4-kube-api-access-cq6mw\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.811414 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.830016 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-config" (OuterVolumeSpecName: "config") pod "750febeb-10c6-4c60-b3a8-de1e417213f4" (UID: "750febeb-10c6-4c60-b3a8-de1e417213f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.837658 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "750febeb-10c6-4c60-b3a8-de1e417213f4" (UID: "750febeb-10c6-4c60-b3a8-de1e417213f4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.843164 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "750febeb-10c6-4c60-b3a8-de1e417213f4" (UID: "750febeb-10c6-4c60-b3a8-de1e417213f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.913618 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.913664 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.913678 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750febeb-10c6-4c60-b3a8-de1e417213f4-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.998296 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" event={"ID":"750febeb-10c6-4c60-b3a8-de1e417213f4","Type":"ContainerDied","Data":"30856eab4bba6d5aeb5d22a34b48f1d4d11986fc547858eeb6c168ef5921507e"} Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.998672 4784 scope.go:117] "RemoveContainer" containerID="e10ba1bb494c3c49f0632a1f5c80940d6c7cee912fac556a037cd7f4cf53d8f3" Jan 23 06:44:45 crc kubenswrapper[4784]: I0123 06:44:45.998344 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vjn7m" Jan 23 06:44:46 crc kubenswrapper[4784]: I0123 06:44:46.027741 4784 scope.go:117] "RemoveContainer" containerID="febd32782652ae77f74831c89fda60766f412d3ad4a5c80b91d35d58b9c1e39a" Jan 23 06:44:46 crc kubenswrapper[4784]: I0123 06:44:46.116881 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vjn7m"] Jan 23 06:44:46 crc kubenswrapper[4784]: I0123 06:44:46.135813 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vjn7m"] Jan 23 06:44:46 crc kubenswrapper[4784]: E0123 06:44:46.218336 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod750febeb_10c6_4c60_b3a8_de1e417213f4.slice\": RecentStats: unable to find data in memory cache]" Jan 23 06:44:47 crc kubenswrapper[4784]: I0123 06:44:47.268989 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750febeb-10c6-4c60-b3a8-de1e417213f4" path="/var/lib/kubelet/pods/750febeb-10c6-4c60-b3a8-de1e417213f4/volumes" Jan 23 06:44:49 crc kubenswrapper[4784]: I0123 06:44:49.038560 4784 generic.go:334] "Generic (PLEG): container finished" podID="c9104535-ee58-4cc4-ac36-18a922118bed" containerID="b621f79d732e8d839f37db0483f5411a10f308b98c40d2b8ee777e82fd03805f" exitCode=0 Jan 23 06:44:49 crc kubenswrapper[4784]: I0123 06:44:49.039015 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dtt27" event={"ID":"c9104535-ee58-4cc4-ac36-18a922118bed","Type":"ContainerDied","Data":"b621f79d732e8d839f37db0483f5411a10f308b98c40d2b8ee777e82fd03805f"} Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.054556 4784 generic.go:334] "Generic (PLEG): container finished" podID="95c9045d-accf-4fe6-b22a-1b9cee39a56c" containerID="9b526ad1247934d3f7b8cb407807dd71a97aa2df829c42d4a18f7c173c29cf56" exitCode=0 Jan 23 
06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.054743 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" event={"ID":"95c9045d-accf-4fe6-b22a-1b9cee39a56c","Type":"ContainerDied","Data":"9b526ad1247934d3f7b8cb407807dd71a97aa2df829c42d4a18f7c173c29cf56"} Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.418529 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dtt27" Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.540835 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dcg7\" (UniqueName: \"kubernetes.io/projected/c9104535-ee58-4cc4-ac36-18a922118bed-kube-api-access-2dcg7\") pod \"c9104535-ee58-4cc4-ac36-18a922118bed\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.541017 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-scripts\") pod \"c9104535-ee58-4cc4-ac36-18a922118bed\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.541069 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-config-data\") pod \"c9104535-ee58-4cc4-ac36-18a922118bed\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.541137 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-combined-ca-bundle\") pod \"c9104535-ee58-4cc4-ac36-18a922118bed\" (UID: \"c9104535-ee58-4cc4-ac36-18a922118bed\") " Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.552262 4784 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-scripts" (OuterVolumeSpecName: "scripts") pod "c9104535-ee58-4cc4-ac36-18a922118bed" (UID: "c9104535-ee58-4cc4-ac36-18a922118bed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.554522 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9104535-ee58-4cc4-ac36-18a922118bed-kube-api-access-2dcg7" (OuterVolumeSpecName: "kube-api-access-2dcg7") pod "c9104535-ee58-4cc4-ac36-18a922118bed" (UID: "c9104535-ee58-4cc4-ac36-18a922118bed"). InnerVolumeSpecName "kube-api-access-2dcg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.574368 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9104535-ee58-4cc4-ac36-18a922118bed" (UID: "c9104535-ee58-4cc4-ac36-18a922118bed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.576875 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-config-data" (OuterVolumeSpecName: "config-data") pod "c9104535-ee58-4cc4-ac36-18a922118bed" (UID: "c9104535-ee58-4cc4-ac36-18a922118bed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.644839 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dcg7\" (UniqueName: \"kubernetes.io/projected/c9104535-ee58-4cc4-ac36-18a922118bed-kube-api-access-2dcg7\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.644883 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.644899 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:50 crc kubenswrapper[4784]: I0123 06:44:50.644911 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9104535-ee58-4cc4-ac36-18a922118bed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.070426 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dtt27" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.072025 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dtt27" event={"ID":"c9104535-ee58-4cc4-ac36-18a922118bed","Type":"ContainerDied","Data":"3ff363a39d0ed9851fa80cdb0d4ebf33366b04d756f58d926696b8666b9491fe"} Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.072096 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ff363a39d0ed9851fa80cdb0d4ebf33366b04d756f58d926696b8666b9491fe" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.296355 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.297162 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-log" containerID="cri-o://6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1" gracePeriod=30 Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.305162 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-api" containerID="cri-o://1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca" gracePeriod=30 Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.333298 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.333659 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8e7c2a29-4715-4c8d-80ac-a2a476a537af" containerName="nova-scheduler-scheduler" containerID="cri-o://113d5bd26b46642d3068487bf0ed3e41fa897b5a0c72eb80fe257089c060e66c" gracePeriod=30 Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 
06:44:51.495584 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.670339 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-combined-ca-bundle\") pod \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.670431 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdmnp\" (UniqueName: \"kubernetes.io/projected/95c9045d-accf-4fe6-b22a-1b9cee39a56c-kube-api-access-gdmnp\") pod \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.670507 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-config-data\") pod \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.670563 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-scripts\") pod \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\" (UID: \"95c9045d-accf-4fe6-b22a-1b9cee39a56c\") " Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.675696 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-scripts" (OuterVolumeSpecName: "scripts") pod "95c9045d-accf-4fe6-b22a-1b9cee39a56c" (UID: "95c9045d-accf-4fe6-b22a-1b9cee39a56c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.679788 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c9045d-accf-4fe6-b22a-1b9cee39a56c-kube-api-access-gdmnp" (OuterVolumeSpecName: "kube-api-access-gdmnp") pod "95c9045d-accf-4fe6-b22a-1b9cee39a56c" (UID: "95c9045d-accf-4fe6-b22a-1b9cee39a56c"). InnerVolumeSpecName "kube-api-access-gdmnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.714003 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-config-data" (OuterVolumeSpecName: "config-data") pod "95c9045d-accf-4fe6-b22a-1b9cee39a56c" (UID: "95c9045d-accf-4fe6-b22a-1b9cee39a56c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.716794 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95c9045d-accf-4fe6-b22a-1b9cee39a56c" (UID: "95c9045d-accf-4fe6-b22a-1b9cee39a56c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.775429 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.775486 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.775500 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9045d-accf-4fe6-b22a-1b9cee39a56c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:51 crc kubenswrapper[4784]: I0123 06:44:51.775518 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdmnp\" (UniqueName: \"kubernetes.io/projected/95c9045d-accf-4fe6-b22a-1b9cee39a56c-kube-api-access-gdmnp\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.085012 4784 generic.go:334] "Generic (PLEG): container finished" podID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerID="6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1" exitCode=143 Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.085106 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a","Type":"ContainerDied","Data":"6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1"} Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.087240 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" event={"ID":"95c9045d-accf-4fe6-b22a-1b9cee39a56c","Type":"ContainerDied","Data":"f38b679fdcbf4932739a1c915a73b0a74bb06dccc57e5af7a10bdf9e62822193"} Jan 23 
06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.087269 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f38b679fdcbf4932739a1c915a73b0a74bb06dccc57e5af7a10bdf9e62822193" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.087364 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t4pnl" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.201166 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 06:44:52 crc kubenswrapper[4784]: E0123 06:44:52.201780 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerName="init" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.201800 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerName="init" Jan 23 06:44:52 crc kubenswrapper[4784]: E0123 06:44:52.201831 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c9045d-accf-4fe6-b22a-1b9cee39a56c" containerName="nova-cell1-conductor-db-sync" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.201839 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c9045d-accf-4fe6-b22a-1b9cee39a56c" containerName="nova-cell1-conductor-db-sync" Jan 23 06:44:52 crc kubenswrapper[4784]: E0123 06:44:52.201855 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerName="dnsmasq-dns" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.201863 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerName="dnsmasq-dns" Jan 23 06:44:52 crc kubenswrapper[4784]: E0123 06:44:52.201875 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9104535-ee58-4cc4-ac36-18a922118bed" containerName="nova-manage" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 
06:44:52.201881 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9104535-ee58-4cc4-ac36-18a922118bed" containerName="nova-manage" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.202071 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9104535-ee58-4cc4-ac36-18a922118bed" containerName="nova-manage" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.202109 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c9045d-accf-4fe6-b22a-1b9cee39a56c" containerName="nova-cell1-conductor-db-sync" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.202139 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="750febeb-10c6-4c60-b3a8-de1e417213f4" containerName="dnsmasq-dns" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.202985 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.205258 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.259368 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.391037 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndbd\" (UniqueName: \"kubernetes.io/projected/d47922f2-9fc2-41d3-bd0b-7df1a238a218-kube-api-access-gndbd\") pod \"nova-cell1-conductor-0\" (UID: \"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.392466 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47922f2-9fc2-41d3-bd0b-7df1a238a218-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.392659 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47922f2-9fc2-41d3-bd0b-7df1a238a218-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.495854 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47922f2-9fc2-41d3-bd0b-7df1a238a218-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.496406 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gndbd\" (UniqueName: \"kubernetes.io/projected/d47922f2-9fc2-41d3-bd0b-7df1a238a218-kube-api-access-gndbd\") pod \"nova-cell1-conductor-0\" (UID: \"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.496490 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47922f2-9fc2-41d3-bd0b-7df1a238a218-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.507553 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d47922f2-9fc2-41d3-bd0b-7df1a238a218-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 
06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.513472 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47922f2-9fc2-41d3-bd0b-7df1a238a218-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.516953 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gndbd\" (UniqueName: \"kubernetes.io/projected/d47922f2-9fc2-41d3-bd0b-7df1a238a218-kube-api-access-gndbd\") pod \"nova-cell1-conductor-0\" (UID: \"d47922f2-9fc2-41d3-bd0b-7df1a238a218\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:52 crc kubenswrapper[4784]: I0123 06:44:52.557051 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:53 crc kubenswrapper[4784]: I0123 06:44:53.120577 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.113401 4784 generic.go:334] "Generic (PLEG): container finished" podID="8e7c2a29-4715-4c8d-80ac-a2a476a537af" containerID="113d5bd26b46642d3068487bf0ed3e41fa897b5a0c72eb80fe257089c060e66c" exitCode=0 Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.113614 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8e7c2a29-4715-4c8d-80ac-a2a476a537af","Type":"ContainerDied","Data":"113d5bd26b46642d3068487bf0ed3e41fa897b5a0c72eb80fe257089c060e66c"} Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.113906 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8e7c2a29-4715-4c8d-80ac-a2a476a537af","Type":"ContainerDied","Data":"5f058bfe89262290336e72e702c061d7733741e9e484ce267c90acfa9a13e66e"} Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 
06:44:54.113928 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f058bfe89262290336e72e702c061d7733741e9e484ce267c90acfa9a13e66e" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.115827 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d47922f2-9fc2-41d3-bd0b-7df1a238a218","Type":"ContainerStarted","Data":"3d35b58cb09a2b1356298911b3577231d7bb6caf199ec79b64da2543172dce24"} Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.115898 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d47922f2-9fc2-41d3-bd0b-7df1a238a218","Type":"ContainerStarted","Data":"7e594845e2ffb81a388084172a2fcf1d4e12abe2cfaf4ab328bf7d176c37ab70"} Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.118960 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.154708 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.154674285 podStartE2EDuration="2.154674285s" podCreationTimestamp="2026-01-23 06:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:54.146155106 +0000 UTC m=+1497.378663120" watchObservedRunningTime="2026-01-23 06:44:54.154674285 +0000 UTC m=+1497.387182259" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.198073 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.353721 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-combined-ca-bundle\") pod \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.354711 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-config-data\") pod \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.355230 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62cn5\" (UniqueName: \"kubernetes.io/projected/8e7c2a29-4715-4c8d-80ac-a2a476a537af-kube-api-access-62cn5\") pod \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\" (UID: \"8e7c2a29-4715-4c8d-80ac-a2a476a537af\") " Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.362375 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e7c2a29-4715-4c8d-80ac-a2a476a537af-kube-api-access-62cn5" (OuterVolumeSpecName: "kube-api-access-62cn5") pod "8e7c2a29-4715-4c8d-80ac-a2a476a537af" (UID: "8e7c2a29-4715-4c8d-80ac-a2a476a537af"). InnerVolumeSpecName "kube-api-access-62cn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.393250 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e7c2a29-4715-4c8d-80ac-a2a476a537af" (UID: "8e7c2a29-4715-4c8d-80ac-a2a476a537af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.412989 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-config-data" (OuterVolumeSpecName: "config-data") pod "8e7c2a29-4715-4c8d-80ac-a2a476a537af" (UID: "8e7c2a29-4715-4c8d-80ac-a2a476a537af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.458320 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62cn5\" (UniqueName: \"kubernetes.io/projected/8e7c2a29-4715-4c8d-80ac-a2a476a537af-kube-api-access-62cn5\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.458389 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.458402 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e7c2a29-4715-4c8d-80ac-a2a476a537af-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:54 crc kubenswrapper[4784]: I0123 06:44:54.969288 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.080829 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rplkd\" (UniqueName: \"kubernetes.io/projected/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-kube-api-access-rplkd\") pod \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.080905 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-config-data\") pod \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.080985 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-logs\") pod \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.081200 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-combined-ca-bundle\") pod \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\" (UID: \"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a\") " Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.081387 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-logs" (OuterVolumeSpecName: "logs") pod "d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" (UID: "d27f45e9-7a35-4c14-a1d0-630e22ef2b8a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.081929 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.087879 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-kube-api-access-rplkd" (OuterVolumeSpecName: "kube-api-access-rplkd") pod "d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" (UID: "d27f45e9-7a35-4c14-a1d0-630e22ef2b8a"). InnerVolumeSpecName "kube-api-access-rplkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.116881 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" (UID: "d27f45e9-7a35-4c14-a1d0-630e22ef2b8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.126405 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-config-data" (OuterVolumeSpecName: "config-data") pod "d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" (UID: "d27f45e9-7a35-4c14-a1d0-630e22ef2b8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.134413 4784 generic.go:334] "Generic (PLEG): container finished" podID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerID="1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca" exitCode=0 Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.134541 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.134770 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a","Type":"ContainerDied","Data":"1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca"} Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.134869 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d27f45e9-7a35-4c14-a1d0-630e22ef2b8a","Type":"ContainerDied","Data":"f1e6633007e4cabf4e81a9f46516369af4fa1579251d732a3e9addceffb0d3ad"} Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.134821 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.134932 4784 scope.go:117] "RemoveContainer" containerID="1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.184769 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.184815 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rplkd\" (UniqueName: \"kubernetes.io/projected/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-kube-api-access-rplkd\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.184830 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.218446 4784 scope.go:117] "RemoveContainer" containerID="6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.274510 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.295135 4784 scope.go:117] "RemoveContainer" containerID="1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.295475 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:44:55 crc kubenswrapper[4784]: E0123 06:44:55.296973 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca\": container with ID starting with 
1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca not found: ID does not exist" containerID="1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.297012 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca"} err="failed to get container status \"1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca\": rpc error: code = NotFound desc = could not find container \"1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca\": container with ID starting with 1ec6a7f6cd2a631605ac7b5330692abd966d30f435044990277b61b5a25facca not found: ID does not exist" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.297038 4784 scope.go:117] "RemoveContainer" containerID="6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1" Jan 23 06:44:55 crc kubenswrapper[4784]: E0123 06:44:55.300904 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1\": container with ID starting with 6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1 not found: ID does not exist" containerID="6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.300953 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1"} err="failed to get container status \"6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1\": rpc error: code = NotFound desc = could not find container \"6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1\": container with ID starting with 6ab2c7309b7fe54a0eba0c8709e78dcd4c52c67f2fa75ddc36af444951a79ee1 not found: ID does not 
exist" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.315626 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.336300 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.352616 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 06:44:55 crc kubenswrapper[4784]: E0123 06:44:55.353495 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7c2a29-4715-4c8d-80ac-a2a476a537af" containerName="nova-scheduler-scheduler" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.353528 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7c2a29-4715-4c8d-80ac-a2a476a537af" containerName="nova-scheduler-scheduler" Jan 23 06:44:55 crc kubenswrapper[4784]: E0123 06:44:55.353561 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-log" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.353572 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-log" Jan 23 06:44:55 crc kubenswrapper[4784]: E0123 06:44:55.353609 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-api" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.353620 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-api" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.354007 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-log" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.354036 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8e7c2a29-4715-4c8d-80ac-a2a476a537af" containerName="nova-scheduler-scheduler" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.354067 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" containerName="nova-api-api" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.356065 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.359731 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.371821 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.379878 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.387639 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.423595 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.444736 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.504782 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.504928 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-config-data\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.505162 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvnll\" (UniqueName: \"kubernetes.io/projected/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-kube-api-access-jvnll\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.505351 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-logs\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.505403 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxzd\" (UniqueName: \"kubernetes.io/projected/363e2891-7d58-44f8-9404-6f62b57a87c8-kube-api-access-wvxzd\") pod \"nova-scheduler-0\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.505641 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-config-data\") pod \"nova-scheduler-0\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.505887 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.609833 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.610043 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.610110 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-config-data\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.610199 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvnll\" (UniqueName: \"kubernetes.io/projected/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-kube-api-access-jvnll\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.610276 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-logs\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.610318 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wvxzd\" (UniqueName: \"kubernetes.io/projected/363e2891-7d58-44f8-9404-6f62b57a87c8-kube-api-access-wvxzd\") pod \"nova-scheduler-0\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.610484 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-config-data\") pod \"nova-scheduler-0\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.610944 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-logs\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.614670 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.615326 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-config-data\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.616383 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-config-data\") pod \"nova-scheduler-0\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc 
kubenswrapper[4784]: I0123 06:44:55.620419 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.629500 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvnll\" (UniqueName: \"kubernetes.io/projected/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-kube-api-access-jvnll\") pod \"nova-api-0\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.632430 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxzd\" (UniqueName: \"kubernetes.io/projected/363e2891-7d58-44f8-9404-6f62b57a87c8-kube-api-access-wvxzd\") pod \"nova-scheduler-0\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " pod="openstack/nova-scheduler-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.687600 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:44:55 crc kubenswrapper[4784]: I0123 06:44:55.704249 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:44:56 crc kubenswrapper[4784]: I0123 06:44:56.240651 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:44:56 crc kubenswrapper[4784]: W0123 06:44:56.252442 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod363e2891_7d58_44f8_9404_6f62b57a87c8.slice/crio-25c8bc56e12eec39dc553e606ce80a6c8656dc043ec23ed6c7f87bfd07ca099e WatchSource:0}: Error finding container 25c8bc56e12eec39dc553e606ce80a6c8656dc043ec23ed6c7f87bfd07ca099e: Status 404 returned error can't find the container with id 25c8bc56e12eec39dc553e606ce80a6c8656dc043ec23ed6c7f87bfd07ca099e Jan 23 06:44:56 crc kubenswrapper[4784]: I0123 06:44:56.344274 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.190188 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f0e8e43-7110-45ca-8be2-59c0150b3ac4","Type":"ContainerStarted","Data":"7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9"} Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.190787 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f0e8e43-7110-45ca-8be2-59c0150b3ac4","Type":"ContainerStarted","Data":"5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a"} Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.190828 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f0e8e43-7110-45ca-8be2-59c0150b3ac4","Type":"ContainerStarted","Data":"b98ec623869268659d3733ccf4fa347ae348252748adb3f8b6c28f1ba3ec27dd"} Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.194009 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"363e2891-7d58-44f8-9404-6f62b57a87c8","Type":"ContainerStarted","Data":"933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e"} Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.194042 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"363e2891-7d58-44f8-9404-6f62b57a87c8","Type":"ContainerStarted","Data":"25c8bc56e12eec39dc553e606ce80a6c8656dc043ec23ed6c7f87bfd07ca099e"} Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.230186 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.230160316 podStartE2EDuration="2.230160316s" podCreationTimestamp="2026-01-23 06:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:57.218890628 +0000 UTC m=+1500.451398672" watchObservedRunningTime="2026-01-23 06:44:57.230160316 +0000 UTC m=+1500.462668300" Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.244645 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.244622802 podStartE2EDuration="2.244622802s" podCreationTimestamp="2026-01-23 06:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:57.2384707 +0000 UTC m=+1500.470978714" watchObservedRunningTime="2026-01-23 06:44:57.244622802 +0000 UTC m=+1500.477130776" Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.275572 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7c2a29-4715-4c8d-80ac-a2a476a537af" path="/var/lib/kubelet/pods/8e7c2a29-4715-4c8d-80ac-a2a476a537af/volumes" Jan 23 06:44:57 crc kubenswrapper[4784]: I0123 06:44:57.276592 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d27f45e9-7a35-4c14-a1d0-630e22ef2b8a" 
path="/var/lib/kubelet/pods/d27f45e9-7a35-4c14-a1d0-630e22ef2b8a/volumes" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.010868 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.154357 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h"] Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.156395 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.159247 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.160211 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.171320 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h"] Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.337937 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg4kz\" (UniqueName: \"kubernetes.io/projected/77883559-de68-40d3-9375-f9ee148ccf9b-kube-api-access-jg4kz\") pod \"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.338011 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77883559-de68-40d3-9375-f9ee148ccf9b-secret-volume\") pod 
\"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.338121 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77883559-de68-40d3-9375-f9ee148ccf9b-config-volume\") pod \"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.441339 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg4kz\" (UniqueName: \"kubernetes.io/projected/77883559-de68-40d3-9375-f9ee148ccf9b-kube-api-access-jg4kz\") pod \"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.441885 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77883559-de68-40d3-9375-f9ee148ccf9b-secret-volume\") pod \"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.442197 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77883559-de68-40d3-9375-f9ee148ccf9b-config-volume\") pod \"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.443226 4784 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77883559-de68-40d3-9375-f9ee148ccf9b-config-volume\") pod \"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.450034 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77883559-de68-40d3-9375-f9ee148ccf9b-secret-volume\") pod \"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.465062 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg4kz\" (UniqueName: \"kubernetes.io/projected/77883559-de68-40d3-9375-f9ee148ccf9b-kube-api-access-jg4kz\") pod \"collect-profiles-29485845-w695h\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.486526 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:00 crc kubenswrapper[4784]: I0123 06:45:00.705791 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 06:45:01 crc kubenswrapper[4784]: I0123 06:45:01.010572 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h"] Jan 23 06:45:01 crc kubenswrapper[4784]: I0123 06:45:01.247854 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" event={"ID":"77883559-de68-40d3-9375-f9ee148ccf9b","Type":"ContainerStarted","Data":"16d584d4be952b2165e9d4d80e01885da4fbcf50ae5f7ccb6c90d5bc19c063c1"} Jan 23 06:45:02 crc kubenswrapper[4784]: I0123 06:45:02.260893 4784 generic.go:334] "Generic (PLEG): container finished" podID="77883559-de68-40d3-9375-f9ee148ccf9b" containerID="6462b177a2054edd05b2af376e02f699a1f2f96bacbda24887687c093699c490" exitCode=0 Jan 23 06:45:02 crc kubenswrapper[4784]: I0123 06:45:02.261025 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" event={"ID":"77883559-de68-40d3-9375-f9ee148ccf9b","Type":"ContainerDied","Data":"6462b177a2054edd05b2af376e02f699a1f2f96bacbda24887687c093699c490"} Jan 23 06:45:02 crc kubenswrapper[4784]: I0123 06:45:02.607137 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.723959 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.840783 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg4kz\" (UniqueName: \"kubernetes.io/projected/77883559-de68-40d3-9375-f9ee148ccf9b-kube-api-access-jg4kz\") pod \"77883559-de68-40d3-9375-f9ee148ccf9b\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.840843 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77883559-de68-40d3-9375-f9ee148ccf9b-secret-volume\") pod \"77883559-de68-40d3-9375-f9ee148ccf9b\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.840926 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77883559-de68-40d3-9375-f9ee148ccf9b-config-volume\") pod \"77883559-de68-40d3-9375-f9ee148ccf9b\" (UID: \"77883559-de68-40d3-9375-f9ee148ccf9b\") " Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.842136 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77883559-de68-40d3-9375-f9ee148ccf9b-config-volume" (OuterVolumeSpecName: "config-volume") pod "77883559-de68-40d3-9375-f9ee148ccf9b" (UID: "77883559-de68-40d3-9375-f9ee148ccf9b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.849802 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77883559-de68-40d3-9375-f9ee148ccf9b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77883559-de68-40d3-9375-f9ee148ccf9b" (UID: "77883559-de68-40d3-9375-f9ee148ccf9b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.857327 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77883559-de68-40d3-9375-f9ee148ccf9b-kube-api-access-jg4kz" (OuterVolumeSpecName: "kube-api-access-jg4kz") pod "77883559-de68-40d3-9375-f9ee148ccf9b" (UID: "77883559-de68-40d3-9375-f9ee148ccf9b"). InnerVolumeSpecName "kube-api-access-jg4kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.944173 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77883559-de68-40d3-9375-f9ee148ccf9b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.944237 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg4kz\" (UniqueName: \"kubernetes.io/projected/77883559-de68-40d3-9375-f9ee148ccf9b-kube-api-access-jg4kz\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:03 crc kubenswrapper[4784]: I0123 06:45:03.944270 4784 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77883559-de68-40d3-9375-f9ee148ccf9b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.065922 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.066322 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" containerName="kube-state-metrics" containerID="cri-o://19aee540d361654a29a3fe2e73d6a71083ea09edff738b6a6503e1690dea7972" gracePeriod=30 Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.286838 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" containerID="19aee540d361654a29a3fe2e73d6a71083ea09edff738b6a6503e1690dea7972" exitCode=2 Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.286932 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa","Type":"ContainerDied","Data":"19aee540d361654a29a3fe2e73d6a71083ea09edff738b6a6503e1690dea7972"} Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.291008 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" event={"ID":"77883559-de68-40d3-9375-f9ee148ccf9b","Type":"ContainerDied","Data":"16d584d4be952b2165e9d4d80e01885da4fbcf50ae5f7ccb6c90d5bc19c063c1"} Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.291054 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d584d4be952b2165e9d4d80e01885da4fbcf50ae5f7ccb6c90d5bc19c063c1" Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.291082 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h" Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.511630 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.659598 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntprq\" (UniqueName: \"kubernetes.io/projected/2c542d52-d20d-41d2-8b80-fb2a9bf5bafa-kube-api-access-ntprq\") pod \"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa\" (UID: \"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa\") " Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.666896 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c542d52-d20d-41d2-8b80-fb2a9bf5bafa-kube-api-access-ntprq" (OuterVolumeSpecName: "kube-api-access-ntprq") pod "2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" (UID: "2c542d52-d20d-41d2-8b80-fb2a9bf5bafa"). InnerVolumeSpecName "kube-api-access-ntprq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:04 crc kubenswrapper[4784]: I0123 06:45:04.763450 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntprq\" (UniqueName: \"kubernetes.io/projected/2c542d52-d20d-41d2-8b80-fb2a9bf5bafa-kube-api-access-ntprq\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.323065 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.323140 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2c542d52-d20d-41d2-8b80-fb2a9bf5bafa","Type":"ContainerDied","Data":"851c61258df9ce6aed9cdea63dcdfe3ef9704a8f0d6eb006a79e5111c6a26dc1"} Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.323232 4784 scope.go:117] "RemoveContainer" containerID="19aee540d361654a29a3fe2e73d6a71083ea09edff738b6a6503e1690dea7972" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.364767 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.392625 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.407421 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:45:05 crc kubenswrapper[4784]: E0123 06:45:05.408104 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" containerName="kube-state-metrics" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.408129 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" containerName="kube-state-metrics" Jan 23 06:45:05 crc kubenswrapper[4784]: E0123 06:45:05.408157 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77883559-de68-40d3-9375-f9ee148ccf9b" containerName="collect-profiles" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.408165 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="77883559-de68-40d3-9375-f9ee148ccf9b" containerName="collect-profiles" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.408486 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="77883559-de68-40d3-9375-f9ee148ccf9b" 
containerName="collect-profiles" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.408516 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" containerName="kube-state-metrics" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.409617 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.412667 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.412995 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.420431 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.484282 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.484353 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.484461 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t54zz\" (UniqueName: 
\"kubernetes.io/projected/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-api-access-t54zz\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.484501 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.587024 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t54zz\" (UniqueName: \"kubernetes.io/projected/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-api-access-t54zz\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.587107 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.587246 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.587296 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.594966 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.594983 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.604623 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.608282 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t54zz\" (UniqueName: \"kubernetes.io/projected/4e84d3df-4011-472a-9b95-9ed21dea27d5-kube-api-access-t54zz\") pod \"kube-state-metrics-0\" (UID: \"4e84d3df-4011-472a-9b95-9ed21dea27d5\") " pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.688939 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.689330 4784 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.705557 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.739231 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 06:45:05 crc kubenswrapper[4784]: I0123 06:45:05.749899 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.343876 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.397552 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.513828 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.514290 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="ceilometer-central-agent" containerID="cri-o://5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736" gracePeriod=30 Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.514343 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="sg-core" containerID="cri-o://2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7" gracePeriod=30 Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.514399 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d5166b5-66c2-4450-933f-c66331343200" 
containerName="ceilometer-notification-agent" containerID="cri-o://5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47" gracePeriod=30 Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.514362 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="proxy-httpd" containerID="cri-o://92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c" gracePeriod=30 Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.772146 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:06 crc kubenswrapper[4784]: I0123 06:45:06.772143 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.276440 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c542d52-d20d-41d2-8b80-fb2a9bf5bafa" path="/var/lib/kubelet/pods/2c542d52-d20d-41d2-8b80-fb2a9bf5bafa/volumes" Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.373616 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4e84d3df-4011-472a-9b95-9ed21dea27d5","Type":"ContainerStarted","Data":"8f8ea897b2fd3969d33cd61727046466040e10f536aa9ed3bc0d81a479272c3f"} Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.373687 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"4e84d3df-4011-472a-9b95-9ed21dea27d5","Type":"ContainerStarted","Data":"ddd4be335107822289823eabc502580ff0180e4099bc6d41029d8c1d4ef7e629"} Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.375512 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.398900 4784 generic.go:334] "Generic (PLEG): container finished" podID="2d5166b5-66c2-4450-933f-c66331343200" containerID="92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c" exitCode=0 Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.398952 4784 generic.go:334] "Generic (PLEG): container finished" podID="2d5166b5-66c2-4450-933f-c66331343200" containerID="2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7" exitCode=2 Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.398964 4784 generic.go:334] "Generic (PLEG): container finished" podID="2d5166b5-66c2-4450-933f-c66331343200" containerID="5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736" exitCode=0 Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.400333 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerDied","Data":"92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c"} Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.400391 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerDied","Data":"2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7"} Jan 23 06:45:07 crc kubenswrapper[4784]: I0123 06:45:07.400409 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerDied","Data":"5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736"} Jan 23 06:45:07 crc 
kubenswrapper[4784]: I0123 06:45:07.429281 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.864565901 podStartE2EDuration="2.429249523s" podCreationTimestamp="2026-01-23 06:45:05 +0000 UTC" firstStartedPulling="2026-01-23 06:45:06.342187821 +0000 UTC m=+1509.574695795" lastFinishedPulling="2026-01-23 06:45:06.906871443 +0000 UTC m=+1510.139379417" observedRunningTime="2026-01-23 06:45:07.419945123 +0000 UTC m=+1510.652453107" watchObservedRunningTime="2026-01-23 06:45:07.429249523 +0000 UTC m=+1510.661757497" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.027413 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.197006 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-sg-core-conf-yaml\") pod \"2d5166b5-66c2-4450-933f-c66331343200\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.197080 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-log-httpd\") pod \"2d5166b5-66c2-4450-933f-c66331343200\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.197129 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-config-data\") pod \"2d5166b5-66c2-4450-933f-c66331343200\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.197191 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-scripts\") pod \"2d5166b5-66c2-4450-933f-c66331343200\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.197285 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-run-httpd\") pod \"2d5166b5-66c2-4450-933f-c66331343200\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.197422 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgb6g\" (UniqueName: \"kubernetes.io/projected/2d5166b5-66c2-4450-933f-c66331343200-kube-api-access-dgb6g\") pod \"2d5166b5-66c2-4450-933f-c66331343200\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.197482 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-combined-ca-bundle\") pod \"2d5166b5-66c2-4450-933f-c66331343200\" (UID: \"2d5166b5-66c2-4450-933f-c66331343200\") " Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.198310 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d5166b5-66c2-4450-933f-c66331343200" (UID: "2d5166b5-66c2-4450-933f-c66331343200"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.198345 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d5166b5-66c2-4450-933f-c66331343200" (UID: "2d5166b5-66c2-4450-933f-c66331343200"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.200453 4784 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.200483 4784 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d5166b5-66c2-4450-933f-c66331343200-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.227886 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5166b5-66c2-4450-933f-c66331343200-kube-api-access-dgb6g" (OuterVolumeSpecName: "kube-api-access-dgb6g") pod "2d5166b5-66c2-4450-933f-c66331343200" (UID: "2d5166b5-66c2-4450-933f-c66331343200"). InnerVolumeSpecName "kube-api-access-dgb6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.242310 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-scripts" (OuterVolumeSpecName: "scripts") pod "2d5166b5-66c2-4450-933f-c66331343200" (UID: "2d5166b5-66c2-4450-933f-c66331343200"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.248786 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d5166b5-66c2-4450-933f-c66331343200" (UID: "2d5166b5-66c2-4450-933f-c66331343200"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.308933 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.308983 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgb6g\" (UniqueName: \"kubernetes.io/projected/2d5166b5-66c2-4450-933f-c66331343200-kube-api-access-dgb6g\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.309017 4784 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.395983 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d5166b5-66c2-4450-933f-c66331343200" (UID: "2d5166b5-66c2-4450-933f-c66331343200"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.396695 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-config-data" (OuterVolumeSpecName: "config-data") pod "2d5166b5-66c2-4450-933f-c66331343200" (UID: "2d5166b5-66c2-4450-933f-c66331343200"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.412290 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.412340 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d5166b5-66c2-4450-933f-c66331343200-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.421329 4784 generic.go:334] "Generic (PLEG): container finished" podID="2d5166b5-66c2-4450-933f-c66331343200" containerID="5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47" exitCode=0 Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.421830 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.421824 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerDied","Data":"5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47"} Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.422123 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d5166b5-66c2-4450-933f-c66331343200","Type":"ContainerDied","Data":"322bfa14852b742da9f2f1728359adb9d9be0d1fbdcc98bfdfb5d1b91725e13c"} Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.422162 4784 scope.go:117] "RemoveContainer" containerID="92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.479300 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.487274 4784 scope.go:117] "RemoveContainer" containerID="2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.504218 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.536274 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.537679 4784 scope.go:117] "RemoveContainer" containerID="5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47" Jan 23 06:45:08 crc kubenswrapper[4784]: E0123 06:45:08.538125 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="ceilometer-central-agent" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.538167 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5166b5-66c2-4450-933f-c66331343200" 
containerName="ceilometer-central-agent" Jan 23 06:45:08 crc kubenswrapper[4784]: E0123 06:45:08.538185 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="ceilometer-notification-agent" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.538192 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="ceilometer-notification-agent" Jan 23 06:45:08 crc kubenswrapper[4784]: E0123 06:45:08.538218 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="sg-core" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.538225 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="sg-core" Jan 23 06:45:08 crc kubenswrapper[4784]: E0123 06:45:08.538252 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="proxy-httpd" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.538259 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="proxy-httpd" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.538528 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="ceilometer-notification-agent" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.538560 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="proxy-httpd" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.538572 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="sg-core" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.538589 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d5166b5-66c2-4450-933f-c66331343200" containerName="ceilometer-central-agent" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.545073 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.548561 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.548819 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.548887 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.581092 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.581573 4784 scope.go:117] "RemoveContainer" containerID="5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.622482 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-config-data\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.622856 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-scripts\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.622910 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.622974 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6kfd\" (UniqueName: \"kubernetes.io/projected/3e7692b8-7b60-4294-b6e5-7e2145383b4e-kube-api-access-v6kfd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.623009 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-run-httpd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.623071 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.623089 4784 scope.go:117] "RemoveContainer" containerID="92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.623100 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.623407 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-log-httpd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: E0123 06:45:08.624351 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c\": container with ID starting with 92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c not found: ID does not exist" containerID="92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.624476 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c"} err="failed to get container status \"92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c\": rpc error: code = NotFound desc = could not find container \"92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c\": container with ID starting with 92ac8b56ff04f43f9d573109c7f87ad05183bd964a308d6a7ef760f2eba5290c not found: ID does not exist" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.624588 4784 scope.go:117] "RemoveContainer" containerID="2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7" Jan 23 06:45:08 crc kubenswrapper[4784]: E0123 06:45:08.625207 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7\": container with ID starting with 2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7 not found: ID does not exist" containerID="2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7" Jan 23 06:45:08 crc 
kubenswrapper[4784]: I0123 06:45:08.625320 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7"} err="failed to get container status \"2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7\": rpc error: code = NotFound desc = could not find container \"2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7\": container with ID starting with 2f5aeaf3479941ead773ffced2ab64a3f88e90846903b9220163a292d732add7 not found: ID does not exist" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.625496 4784 scope.go:117] "RemoveContainer" containerID="5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47" Jan 23 06:45:08 crc kubenswrapper[4784]: E0123 06:45:08.630040 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47\": container with ID starting with 5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47 not found: ID does not exist" containerID="5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.630123 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47"} err="failed to get container status \"5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47\": rpc error: code = NotFound desc = could not find container \"5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47\": container with ID starting with 5f55d7ab420982ca51cb56f93ff9137975fed8432a56e468e127f2464be6ca47 not found: ID does not exist" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.630168 4784 scope.go:117] "RemoveContainer" containerID="5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736" Jan 23 
06:45:08 crc kubenswrapper[4784]: E0123 06:45:08.634364 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736\": container with ID starting with 5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736 not found: ID does not exist" containerID="5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.634434 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736"} err="failed to get container status \"5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736\": rpc error: code = NotFound desc = could not find container \"5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736\": container with ID starting with 5ddf7d3d2dffe8825eee3d441e6cae1d0e455fff705315494a8e412f89ce4736 not found: ID does not exist" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.728769 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-config-data\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.728928 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-scripts\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.729003 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.729095 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6kfd\" (UniqueName: \"kubernetes.io/projected/3e7692b8-7b60-4294-b6e5-7e2145383b4e-kube-api-access-v6kfd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.729143 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-run-httpd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.729659 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.729710 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.729804 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-log-httpd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc 
kubenswrapper[4784]: I0123 06:45:08.730136 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-run-httpd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.736600 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-log-httpd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.738140 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-config-data\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.739553 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.743367 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.744998 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-scripts\") pod \"ceilometer-0\" (UID: 
\"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.748895 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.760746 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6kfd\" (UniqueName: \"kubernetes.io/projected/3e7692b8-7b60-4294-b6e5-7e2145383b4e-kube-api-access-v6kfd\") pod \"ceilometer-0\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " pod="openstack/ceilometer-0" Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.896615 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:09 crc kubenswrapper[4784]: I0123 06:45:09.275371 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5166b5-66c2-4450-933f-c66331343200" path="/var/lib/kubelet/pods/2d5166b5-66c2-4450-933f-c66331343200/volumes" Jan 23 06:45:09 crc kubenswrapper[4784]: I0123 06:45:09.415337 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:09 crc kubenswrapper[4784]: I0123 06:45:09.432932 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerStarted","Data":"0475eb91bce43e47981419b51b27a202f8600fed919ef7bd96c1904fceda02d0"} Jan 23 06:45:11 crc kubenswrapper[4784]: I0123 06:45:11.463959 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerStarted","Data":"67fc4a316402969be9f35dba3cfe456d8ab0eccee6b6388764daf51caa0b3317"} Jan 23 06:45:12 crc kubenswrapper[4784]: 
I0123 06:45:12.500259 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerStarted","Data":"b1ca7390d149cca589016676eb04d2ae16a95de9fe49795e8f8b06dd45da115a"} Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.501038 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerStarted","Data":"0c2c23c62e67b003e0d55a9bd487e9b689074c82e644c20b00893fcf6e7247ad"} Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.505433 4784 generic.go:334] "Generic (PLEG): container finished" podID="0ed163b1-1994-463b-a9c7-90ce7e097713" containerID="57bc54fd5f50df24b6bcf0537135868ac1dfb7f465f709dada495e033b7f278f" exitCode=137 Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.505542 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ed163b1-1994-463b-a9c7-90ce7e097713","Type":"ContainerDied","Data":"57bc54fd5f50df24b6bcf0537135868ac1dfb7f465f709dada495e033b7f278f"} Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.511629 4784 generic.go:334] "Generic (PLEG): container finished" podID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerID="5e926b8bd80471188814e5a1400c0c8285188f77b62eddf099065fdb13eac7c3" exitCode=137 Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.511717 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1f57947-a8f3-4250-8d6b-9197d5a293b2","Type":"ContainerDied","Data":"5e926b8bd80471188814e5a1400c0c8285188f77b62eddf099065fdb13eac7c3"} Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.511782 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1f57947-a8f3-4250-8d6b-9197d5a293b2","Type":"ContainerDied","Data":"671d74b6bf8cf93e9e16c2c59f316020b7e08dc12c8a361ea215645eb177aebc"} Jan 23 06:45:12 crc 
kubenswrapper[4784]: I0123 06:45:12.511796 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="671d74b6bf8cf93e9e16c2c59f316020b7e08dc12c8a361ea215645eb177aebc" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.604000 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.610883 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.685410 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv7gt\" (UniqueName: \"kubernetes.io/projected/e1f57947-a8f3-4250-8d6b-9197d5a293b2-kube-api-access-qv7gt\") pod \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.685569 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x4l2\" (UniqueName: \"kubernetes.io/projected/0ed163b1-1994-463b-a9c7-90ce7e097713-kube-api-access-5x4l2\") pod \"0ed163b1-1994-463b-a9c7-90ce7e097713\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.685782 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-combined-ca-bundle\") pod \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.685881 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-config-data\") pod \"0ed163b1-1994-463b-a9c7-90ce7e097713\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " 
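The entries above follow klog's structured-logging shape: a severity letter plus timestamp (`I0123 06:45:08.545073`), a `file.go:line]` source, a quoted message, then `key="value"` pairs such as `pod="openstack/ceilometer-0"` or `containerID=...`. A minimal Python sketch for pulling those fields out of lines like these (the field names `sev`/`source`/`msg` are my own labels, not a kubelet API, and the simple regex does not handle escaped quotes inside messages):

```python
import re

# Matches the klog header: severity letter, MMDD HH:MM:SS.micros timestamp,
# thread id, source file:line, then the quoted message and trailing key="value" pairs.
KLOG_RE = re.compile(
    r'(?P<sev>[IEWF])(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+'
    r'(?P<source>[\w.]+:\d+)\]\s+"(?P<msg>[^"]*)"(?P<rest>.*)'
)
KV_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_klog(line: str):
    """Return (severity, source, message, {key: value}) or None if no match."""
    m = KLOG_RE.search(line)
    if not m:
        return None
    fields = dict(KV_RE.findall(m.group("rest")))
    return m.group("sev"), m.group("source"), m.group("msg"), fields

# One of the sandbox-creation entries from the log above:
line = ('Jan 23 06:45:08 crc kubenswrapper[4784]: I0123 06:45:08.545073 4784 '
        'util.go:30] "No sandbox for pod can be found. Need to start a new one" '
        'pod="openstack/ceilometer-0"')
sev, src, msg, kv = parse_klog(line)
```

A parser like this makes it easy to, for example, grep only `E`-severity entries or group events by the `pod` key when triaging a noisy journal.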
Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.686016 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-combined-ca-bundle\") pod \"0ed163b1-1994-463b-a9c7-90ce7e097713\" (UID: \"0ed163b1-1994-463b-a9c7-90ce7e097713\") " Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.686036 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-config-data\") pod \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.686066 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f57947-a8f3-4250-8d6b-9197d5a293b2-logs\") pod \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\" (UID: \"e1f57947-a8f3-4250-8d6b-9197d5a293b2\") " Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.687377 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1f57947-a8f3-4250-8d6b-9197d5a293b2-logs" (OuterVolumeSpecName: "logs") pod "e1f57947-a8f3-4250-8d6b-9197d5a293b2" (UID: "e1f57947-a8f3-4250-8d6b-9197d5a293b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.694744 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed163b1-1994-463b-a9c7-90ce7e097713-kube-api-access-5x4l2" (OuterVolumeSpecName: "kube-api-access-5x4l2") pod "0ed163b1-1994-463b-a9c7-90ce7e097713" (UID: "0ed163b1-1994-463b-a9c7-90ce7e097713"). InnerVolumeSpecName "kube-api-access-5x4l2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.696931 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f57947-a8f3-4250-8d6b-9197d5a293b2-kube-api-access-qv7gt" (OuterVolumeSpecName: "kube-api-access-qv7gt") pod "e1f57947-a8f3-4250-8d6b-9197d5a293b2" (UID: "e1f57947-a8f3-4250-8d6b-9197d5a293b2"). InnerVolumeSpecName "kube-api-access-qv7gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.721093 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-config-data" (OuterVolumeSpecName: "config-data") pod "0ed163b1-1994-463b-a9c7-90ce7e097713" (UID: "0ed163b1-1994-463b-a9c7-90ce7e097713"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.722772 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-config-data" (OuterVolumeSpecName: "config-data") pod "e1f57947-a8f3-4250-8d6b-9197d5a293b2" (UID: "e1f57947-a8f3-4250-8d6b-9197d5a293b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.725633 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1f57947-a8f3-4250-8d6b-9197d5a293b2" (UID: "e1f57947-a8f3-4250-8d6b-9197d5a293b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.725648 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ed163b1-1994-463b-a9c7-90ce7e097713" (UID: "0ed163b1-1994-463b-a9c7-90ce7e097713"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.788729 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.788797 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.788807 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ed163b1-1994-463b-a9c7-90ce7e097713-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.788816 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f57947-a8f3-4250-8d6b-9197d5a293b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.788826 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1f57947-a8f3-4250-8d6b-9197d5a293b2-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.788835 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv7gt\" (UniqueName: 
\"kubernetes.io/projected/e1f57947-a8f3-4250-8d6b-9197d5a293b2-kube-api-access-qv7gt\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:12 crc kubenswrapper[4784]: I0123 06:45:12.788849 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x4l2\" (UniqueName: \"kubernetes.io/projected/0ed163b1-1994-463b-a9c7-90ce7e097713-kube-api-access-5x4l2\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.530176 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ed163b1-1994-463b-a9c7-90ce7e097713","Type":"ContainerDied","Data":"4815907179e6d3e3b7ae99f123e078a5dc3585b23c37626f30dbce10202c3f94"} Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.530215 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.530244 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.530768 4784 scope.go:117] "RemoveContainer" containerID="57bc54fd5f50df24b6bcf0537135868ac1dfb7f465f709dada495e033b7f278f" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.580154 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.594853 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.606979 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.619734 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.636959 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:45:13 crc kubenswrapper[4784]: E0123 06:45:13.644574 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ed163b1-1994-463b-a9c7-90ce7e097713" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.644663 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed163b1-1994-463b-a9c7-90ce7e097713" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 06:45:13 crc kubenswrapper[4784]: E0123 06:45:13.644854 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerName="nova-metadata-metadata" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.644870 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerName="nova-metadata-metadata" Jan 23 06:45:13 crc kubenswrapper[4784]: E0123 06:45:13.644890 4784 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerName="nova-metadata-log" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.644902 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerName="nova-metadata-log" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.645410 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerName="nova-metadata-log" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.645456 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ed163b1-1994-463b-a9c7-90ce7e097713" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.645469 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" containerName="nova-metadata-metadata" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.646512 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.654832 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.659128 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.660575 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.660918 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.661066 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.672193 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.672569 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.676834 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.694778 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.712185 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-config-data\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.712269 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.712334 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2v9\" (UniqueName: \"kubernetes.io/projected/871d6e64-73c9-4a77-8bae-8c96cad28acb-kube-api-access-pt2v9\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.712650 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.712781 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.712938 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.712991 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.713113 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-logs\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.713203 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.713254 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgzf\" (UniqueName: \"kubernetes.io/projected/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-kube-api-access-fcgzf\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.815780 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-config-data\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.816111 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 
06:45:13.816294 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2v9\" (UniqueName: \"kubernetes.io/projected/871d6e64-73c9-4a77-8bae-8c96cad28acb-kube-api-access-pt2v9\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.816439 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.816547 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.816716 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.816863 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.817384 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-logs\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.817529 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.817631 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcgzf\" (UniqueName: \"kubernetes.io/projected/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-kube-api-access-fcgzf\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.818377 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-logs\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.822987 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.823007 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.823014 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-config-data\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.825446 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.826685 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.828737 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871d6e64-73c9-4a77-8bae-8c96cad28acb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.829537 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.842650 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcgzf\" (UniqueName: \"kubernetes.io/projected/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-kube-api-access-fcgzf\") pod \"nova-metadata-0\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " pod="openstack/nova-metadata-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.845300 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2v9\" (UniqueName: \"kubernetes.io/projected/871d6e64-73c9-4a77-8bae-8c96cad28acb-kube-api-access-pt2v9\") pod \"nova-cell1-novncproxy-0\" (UID: \"871d6e64-73c9-4a77-8bae-8c96cad28acb\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:13 crc kubenswrapper[4784]: I0123 06:45:13.993322 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:14 crc kubenswrapper[4784]: I0123 06:45:14.009045 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:14.548970 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerStarted","Data":"67325ba2851a804b4b1fcacf794035787d0e78661b9a5111b128717b6d18ed5d"} Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:14.550839 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:14.595652 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.053469424 podStartE2EDuration="6.59559176s" podCreationTimestamp="2026-01-23 06:45:08 +0000 UTC" firstStartedPulling="2026-01-23 06:45:09.421806474 +0000 UTC m=+1512.654314448" lastFinishedPulling="2026-01-23 06:45:13.96392881 +0000 UTC m=+1517.196436784" observedRunningTime="2026-01-23 06:45:14.584318463 +0000 UTC 
m=+1517.816826457" watchObservedRunningTime="2026-01-23 06:45:14.59559176 +0000 UTC m=+1517.828099734" Jan 23 06:45:15 crc kubenswrapper[4784]: W0123 06:45:14.649186 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod480d35f1_9e5d_4c9b_bdab_fd9531bf794a.slice/crio-1bf1e6b84c39e42cb4836cf55debb3edc31794da2034c7fe78f3ff4e7fe87b8f WatchSource:0}: Error finding container 1bf1e6b84c39e42cb4836cf55debb3edc31794da2034c7fe78f3ff4e7fe87b8f: Status 404 returned error can't find the container with id 1bf1e6b84c39e42cb4836cf55debb3edc31794da2034c7fe78f3ff4e7fe87b8f Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:14.660348 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:14.728290 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.284687 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ed163b1-1994-463b-a9c7-90ce7e097713" path="/var/lib/kubelet/pods/0ed163b1-1994-463b-a9c7-90ce7e097713/volumes" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.285869 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f57947-a8f3-4250-8d6b-9197d5a293b2" path="/var/lib/kubelet/pods/e1f57947-a8f3-4250-8d6b-9197d5a293b2/volumes" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.565387 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871d6e64-73c9-4a77-8bae-8c96cad28acb","Type":"ContainerStarted","Data":"1074687aa30c1e63917748f83d0ffc1ea9866794053b2d31173ea9397c0b4825"} Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.565464 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"871d6e64-73c9-4a77-8bae-8c96cad28acb","Type":"ContainerStarted","Data":"2de6f3955f609595bef9d4a50ceb113ec443ffa5520872e5efb83e36527ee969"} Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.572413 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480d35f1-9e5d-4c9b-bdab-fd9531bf794a","Type":"ContainerStarted","Data":"608de97c178807991db1e18a6baacbfc425d7344e764e0b3c94aaaf73716265a"} Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.572467 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480d35f1-9e5d-4c9b-bdab-fd9531bf794a","Type":"ContainerStarted","Data":"8fc8cdc39d7210f398376f06a3569ff42b1b826325a0f5b52bed52dd13895968"} Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.572478 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480d35f1-9e5d-4c9b-bdab-fd9531bf794a","Type":"ContainerStarted","Data":"1bf1e6b84c39e42cb4836cf55debb3edc31794da2034c7fe78f3ff4e7fe87b8f"} Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.602925 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.602897368 podStartE2EDuration="2.602897368s" podCreationTimestamp="2026-01-23 06:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:45:15.587042367 +0000 UTC m=+1518.819550341" watchObservedRunningTime="2026-01-23 06:45:15.602897368 +0000 UTC m=+1518.835405342" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.615555 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.615524029 podStartE2EDuration="2.615524029s" podCreationTimestamp="2026-01-23 06:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:45:15.612289329 +0000 UTC m=+1518.844797313" watchObservedRunningTime="2026-01-23 06:45:15.615524029 +0000 UTC m=+1518.848032003" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.692167 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.692807 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.698025 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.698145 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 06:45:15 crc kubenswrapper[4784]: I0123 06:45:15.763498 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.588181 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.597311 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.850667 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-q5j59"] Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.876271 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.921931 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-q5j59"] Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.939007 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.950486 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.950654 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4qgk\" (UniqueName: \"kubernetes.io/projected/e6c0eaf9-bfa3-491c-a219-6450089b378e-kube-api-access-k4qgk\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.952676 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.952782 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-config\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:16 crc kubenswrapper[4784]: I0123 06:45:16.952871 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.058406 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.058482 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.058521 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4qgk\" (UniqueName: \"kubernetes.io/projected/e6c0eaf9-bfa3-491c-a219-6450089b378e-kube-api-access-k4qgk\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.058787 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.058819 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-config\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.058853 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.060851 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.064480 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.065709 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.067898 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.068788 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-config\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.130027 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4qgk\" (UniqueName: \"kubernetes.io/projected/e6c0eaf9-bfa3-491c-a219-6450089b378e-kube-api-access-k4qgk\") pod \"dnsmasq-dns-89c5cd4d5-q5j59\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.259011 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:17 crc kubenswrapper[4784]: I0123 06:45:17.943158 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-q5j59"] Jan 23 06:45:18 crc kubenswrapper[4784]: I0123 06:45:18.621420 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" event={"ID":"e6c0eaf9-bfa3-491c-a219-6450089b378e","Type":"ContainerStarted","Data":"ead164944e353114b4f88fe5edf81095d6745bd3f41c867d172bce3f27665a66"} Jan 23 06:45:18 crc kubenswrapper[4784]: I0123 06:45:18.993719 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:19 crc kubenswrapper[4784]: I0123 06:45:19.009665 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:45:19 crc kubenswrapper[4784]: I0123 06:45:19.009743 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:45:19 crc kubenswrapper[4784]: I0123 06:45:19.635028 4784 generic.go:334] "Generic (PLEG): container finished" podID="e6c0eaf9-bfa3-491c-a219-6450089b378e" containerID="78cf49542c186cc114b93c127adca1a3480906f0462ed0f3efb59e6be57ef153" exitCode=0 Jan 23 06:45:19 crc kubenswrapper[4784]: I0123 06:45:19.635132 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" event={"ID":"e6c0eaf9-bfa3-491c-a219-6450089b378e","Type":"ContainerDied","Data":"78cf49542c186cc114b93c127adca1a3480906f0462ed0f3efb59e6be57ef153"} Jan 23 06:45:19 crc kubenswrapper[4784]: I0123 06:45:19.828340 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:19 crc kubenswrapper[4784]: I0123 06:45:19.829093 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-log" 
containerID="cri-o://5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a" gracePeriod=30 Jan 23 06:45:19 crc kubenswrapper[4784]: I0123 06:45:19.829108 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-api" containerID="cri-o://7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9" gracePeriod=30 Jan 23 06:45:20 crc kubenswrapper[4784]: I0123 06:45:20.653398 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" event={"ID":"e6c0eaf9-bfa3-491c-a219-6450089b378e","Type":"ContainerStarted","Data":"3f8542c3412e5fae107d8041e6b6112a777575f90fd87e7c5c3b5b9f574993a6"} Jan 23 06:45:20 crc kubenswrapper[4784]: I0123 06:45:20.653973 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:20 crc kubenswrapper[4784]: I0123 06:45:20.656897 4784 generic.go:334] "Generic (PLEG): container finished" podID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerID="5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a" exitCode=143 Jan 23 06:45:20 crc kubenswrapper[4784]: I0123 06:45:20.656942 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f0e8e43-7110-45ca-8be2-59c0150b3ac4","Type":"ContainerDied","Data":"5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a"} Jan 23 06:45:20 crc kubenswrapper[4784]: I0123 06:45:20.684766 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" podStartSLOduration=4.68473657 podStartE2EDuration="4.68473657s" podCreationTimestamp="2026-01-23 06:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:45:20.678931997 +0000 UTC m=+1523.911439971" 
watchObservedRunningTime="2026-01-23 06:45:20.68473657 +0000 UTC m=+1523.917244544" Jan 23 06:45:21 crc kubenswrapper[4784]: I0123 06:45:21.558445 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:21 crc kubenswrapper[4784]: I0123 06:45:21.559816 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="ceilometer-central-agent" containerID="cri-o://67fc4a316402969be9f35dba3cfe456d8ab0eccee6b6388764daf51caa0b3317" gracePeriod=30 Jan 23 06:45:21 crc kubenswrapper[4784]: I0123 06:45:21.559928 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="proxy-httpd" containerID="cri-o://67325ba2851a804b4b1fcacf794035787d0e78661b9a5111b128717b6d18ed5d" gracePeriod=30 Jan 23 06:45:21 crc kubenswrapper[4784]: I0123 06:45:21.559971 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="ceilometer-notification-agent" containerID="cri-o://0c2c23c62e67b003e0d55a9bd487e9b689074c82e644c20b00893fcf6e7247ad" gracePeriod=30 Jan 23 06:45:21 crc kubenswrapper[4784]: I0123 06:45:21.559914 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="sg-core" containerID="cri-o://b1ca7390d149cca589016676eb04d2ae16a95de9fe49795e8f8b06dd45da115a" gracePeriod=30 Jan 23 06:45:22 crc kubenswrapper[4784]: I0123 06:45:22.695134 4784 generic.go:334] "Generic (PLEG): container finished" podID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerID="67325ba2851a804b4b1fcacf794035787d0e78661b9a5111b128717b6d18ed5d" exitCode=0 Jan 23 06:45:22 crc kubenswrapper[4784]: I0123 06:45:22.695679 4784 generic.go:334] "Generic (PLEG): container 
finished" podID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerID="b1ca7390d149cca589016676eb04d2ae16a95de9fe49795e8f8b06dd45da115a" exitCode=2 Jan 23 06:45:22 crc kubenswrapper[4784]: I0123 06:45:22.695688 4784 generic.go:334] "Generic (PLEG): container finished" podID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerID="0c2c23c62e67b003e0d55a9bd487e9b689074c82e644c20b00893fcf6e7247ad" exitCode=0 Jan 23 06:45:22 crc kubenswrapper[4784]: I0123 06:45:22.695701 4784 generic.go:334] "Generic (PLEG): container finished" podID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerID="67fc4a316402969be9f35dba3cfe456d8ab0eccee6b6388764daf51caa0b3317" exitCode=0 Jan 23 06:45:22 crc kubenswrapper[4784]: I0123 06:45:22.695217 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerDied","Data":"67325ba2851a804b4b1fcacf794035787d0e78661b9a5111b128717b6d18ed5d"} Jan 23 06:45:22 crc kubenswrapper[4784]: I0123 06:45:22.695756 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerDied","Data":"b1ca7390d149cca589016676eb04d2ae16a95de9fe49795e8f8b06dd45da115a"} Jan 23 06:45:22 crc kubenswrapper[4784]: I0123 06:45:22.695798 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerDied","Data":"0c2c23c62e67b003e0d55a9bd487e9b689074c82e644c20b00893fcf6e7247ad"} Jan 23 06:45:22 crc kubenswrapper[4784]: I0123 06:45:22.695811 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerDied","Data":"67fc4a316402969be9f35dba3cfe456d8ab0eccee6b6388764daf51caa0b3317"} Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.281900 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.312331 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-config-data\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.312407 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-ceilometer-tls-certs\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.312442 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-combined-ca-bundle\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.312518 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-run-httpd\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.312547 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-log-httpd\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.312640 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-sg-core-conf-yaml\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.312691 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-scripts\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.312785 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6kfd\" (UniqueName: \"kubernetes.io/projected/3e7692b8-7b60-4294-b6e5-7e2145383b4e-kube-api-access-v6kfd\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.313169 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.313417 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.326459 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e7692b8-7b60-4294-b6e5-7e2145383b4e-kube-api-access-v6kfd" (OuterVolumeSpecName: "kube-api-access-v6kfd") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "kube-api-access-v6kfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.339363 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-scripts" (OuterVolumeSpecName: "scripts") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.339973 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6kfd\" (UniqueName: \"kubernetes.io/projected/3e7692b8-7b60-4294-b6e5-7e2145383b4e-kube-api-access-v6kfd\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.340000 4784 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.340013 4784 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e7692b8-7b60-4294-b6e5-7e2145383b4e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.340025 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 
crc kubenswrapper[4784]: I0123 06:45:23.366254 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.432638 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.447715 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.451082 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-ceilometer-tls-certs\") pod \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\" (UID: \"3e7692b8-7b60-4294-b6e5-7e2145383b4e\") " Jan 23 06:45:23 crc kubenswrapper[4784]: W0123 06:45:23.451357 4784 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3e7692b8-7b60-4294-b6e5-7e2145383b4e/volumes/kubernetes.io~secret/ceilometer-tls-certs Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.451391 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.452574 4784 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.452601 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.452614 4784 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.520389 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-config-data" (OuterVolumeSpecName: "config-data") pod "3e7692b8-7b60-4294-b6e5-7e2145383b4e" (UID: "3e7692b8-7b60-4294-b6e5-7e2145383b4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.555624 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7692b8-7b60-4294-b6e5-7e2145383b4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.680083 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.737171 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e7692b8-7b60-4294-b6e5-7e2145383b4e","Type":"ContainerDied","Data":"0475eb91bce43e47981419b51b27a202f8600fed919ef7bd96c1904fceda02d0"} Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.737264 4784 scope.go:117] "RemoveContainer" containerID="67325ba2851a804b4b1fcacf794035787d0e78661b9a5111b128717b6d18ed5d" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.737323 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.751173 4784 generic.go:334] "Generic (PLEG): container finished" podID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerID="7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9" exitCode=0 Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.751240 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f0e8e43-7110-45ca-8be2-59c0150b3ac4","Type":"ContainerDied","Data":"7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9"} Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.751279 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f0e8e43-7110-45ca-8be2-59c0150b3ac4","Type":"ContainerDied","Data":"b98ec623869268659d3733ccf4fa347ae348252748adb3f8b6c28f1ba3ec27dd"} Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.751275 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.759905 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-config-data\") pod \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.760064 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvnll\" (UniqueName: \"kubernetes.io/projected/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-kube-api-access-jvnll\") pod \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.760169 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-logs\") pod \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.760289 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-combined-ca-bundle\") pod \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\" (UID: \"2f0e8e43-7110-45ca-8be2-59c0150b3ac4\") " Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.764245 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-logs" (OuterVolumeSpecName: "logs") pod "2f0e8e43-7110-45ca-8be2-59c0150b3ac4" (UID: "2f0e8e43-7110-45ca-8be2-59c0150b3ac4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.797185 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.803339 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-kube-api-access-jvnll" (OuterVolumeSpecName: "kube-api-access-jvnll") pod "2f0e8e43-7110-45ca-8be2-59c0150b3ac4" (UID: "2f0e8e43-7110-45ca-8be2-59c0150b3ac4"). InnerVolumeSpecName "kube-api-access-jvnll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.811063 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.831674 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f0e8e43-7110-45ca-8be2-59c0150b3ac4" (UID: "2f0e8e43-7110-45ca-8be2-59c0150b3ac4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.863655 4784 scope.go:117] "RemoveContainer" containerID="b1ca7390d149cca589016676eb04d2ae16a95de9fe49795e8f8b06dd45da115a" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.876706 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-config-data" (OuterVolumeSpecName: "config-data") pod "2f0e8e43-7110-45ca-8be2-59c0150b3ac4" (UID: "2f0e8e43-7110-45ca-8be2-59c0150b3ac4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.883250 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvnll\" (UniqueName: \"kubernetes.io/projected/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-kube-api-access-jvnll\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.883484 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.883561 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.883629 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0e8e43-7110-45ca-8be2-59c0150b3ac4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.921966 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:23 crc kubenswrapper[4784]: E0123 06:45:23.924306 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="sg-core" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.924410 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="sg-core" Jan 23 06:45:23 crc kubenswrapper[4784]: E0123 06:45:23.924472 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="ceilometer-notification-agent" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.924528 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="ceilometer-notification-agent" Jan 23 06:45:23 crc kubenswrapper[4784]: E0123 06:45:23.924602 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-log" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.924654 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-log" Jan 23 06:45:23 crc kubenswrapper[4784]: E0123 06:45:23.924729 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="proxy-httpd" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.924801 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="proxy-httpd" Jan 23 06:45:23 crc kubenswrapper[4784]: E0123 06:45:23.924880 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-api" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.924931 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-api" Jan 23 06:45:23 crc kubenswrapper[4784]: E0123 06:45:23.924992 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="ceilometer-central-agent" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.925049 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="ceilometer-central-agent" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.925740 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="ceilometer-central-agent" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.925852 4784 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="proxy-httpd" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.925945 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="ceilometer-notification-agent" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.926042 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-api" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.926121 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" containerName="sg-core" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.926189 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" containerName="nova-api-log" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.933245 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.950164 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.951321 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.951473 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.953372 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.966610 4784 scope.go:117] "RemoveContainer" containerID="0c2c23c62e67b003e0d55a9bd487e9b689074c82e644c20b00893fcf6e7247ad" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.987359 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-scripts\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.988050 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.988158 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-log-httpd\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 
06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.988365 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-config-data\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.988434 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.988592 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.988649 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq876\" (UniqueName: \"kubernetes.io/projected/75257cd0-fb28-4880-aecb-467abbf8010d-kube-api-access-bq876\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.988691 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-run-httpd\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:23 crc kubenswrapper[4784]: I0123 06:45:23.995145 4784 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.008510 4784 scope.go:117] "RemoveContainer" containerID="67fc4a316402969be9f35dba3cfe456d8ab0eccee6b6388764daf51caa0b3317" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.014853 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.014906 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.046985 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.060399 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:24 crc kubenswrapper[4784]: E0123 06:45:24.061896 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-bq876 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="75257cd0-fb28-4880-aecb-467abbf8010d" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.091288 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-scripts\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.091896 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.091958 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-log-httpd\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.092085 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-config-data\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.092133 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.092234 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.092262 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq876\" (UniqueName: \"kubernetes.io/projected/75257cd0-fb28-4880-aecb-467abbf8010d-kube-api-access-bq876\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.092294 4784 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-run-httpd\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.093063 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-run-httpd\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.104243 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-scripts\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.105356 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-log-httpd\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.106885 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.113824 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-config-data\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.114834 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.115088 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.129027 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq876\" (UniqueName: \"kubernetes.io/projected/75257cd0-fb28-4880-aecb-467abbf8010d-kube-api-access-bq876\") pod \"ceilometer-0\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.270273 4784 scope.go:117] "RemoveContainer" containerID="7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.289638 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.320867 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.335083 4784 scope.go:117] "RemoveContainer" containerID="5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.344954 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.347483 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.354084 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.354417 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.355047 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.387939 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.412155 4784 scope.go:117] "RemoveContainer" containerID="7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9" Jan 23 06:45:24 crc kubenswrapper[4784]: E0123 06:45:24.413583 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9\": container with ID starting with 7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9 not found: ID does not exist" containerID="7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.413660 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9"} err="failed to get container status \"7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9\": rpc error: code = NotFound desc = could not find container \"7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9\": container with ID starting with 7fc12651415f5cab47af875599e9b9d5d591e3ab4ea60af764c712632dfe60a9 not found: ID does not exist" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.413703 4784 
scope.go:117] "RemoveContainer" containerID="5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a" Jan 23 06:45:24 crc kubenswrapper[4784]: E0123 06:45:24.414268 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a\": container with ID starting with 5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a not found: ID does not exist" containerID="5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.414328 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a"} err="failed to get container status \"5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a\": rpc error: code = NotFound desc = could not find container \"5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a\": container with ID starting with 5089493af2e07bbfacb94c48a96faebacddaa28ef9d516b93e8f7fc1aa03a30a not found: ID does not exist" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.503752 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-config-data\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.504013 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.504345 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8d4\" (UniqueName: \"kubernetes.io/projected/ff246545-b372-434e-b61e-51c674848d39-kube-api-access-fk8d4\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.504517 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff246545-b372-434e-b61e-51c674848d39-logs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.504702 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.505122 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.607266 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.607387 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-config-data\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.607433 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.607522 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk8d4\" (UniqueName: \"kubernetes.io/projected/ff246545-b372-434e-b61e-51c674848d39-kube-api-access-fk8d4\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.607554 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff246545-b372-434e-b61e-51c674848d39-logs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.607592 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.608515 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff246545-b372-434e-b61e-51c674848d39-logs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.612451 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.612499 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.614533 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-public-tls-certs\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.614867 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-config-data\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.628049 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk8d4\" (UniqueName: \"kubernetes.io/projected/ff246545-b372-434e-b61e-51c674848d39-kube-api-access-fk8d4\") pod \"nova-api-0\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.722463 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.775260 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.797733 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.804153 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.913875 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-run-httpd\") pod \"75257cd0-fb28-4880-aecb-467abbf8010d\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.914114 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-config-data\") pod \"75257cd0-fb28-4880-aecb-467abbf8010d\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.914145 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-ceilometer-tls-certs\") pod \"75257cd0-fb28-4880-aecb-467abbf8010d\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.914220 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-scripts\") pod \"75257cd0-fb28-4880-aecb-467abbf8010d\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.914314 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq876\" (UniqueName: 
\"kubernetes.io/projected/75257cd0-fb28-4880-aecb-467abbf8010d-kube-api-access-bq876\") pod \"75257cd0-fb28-4880-aecb-467abbf8010d\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.914453 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-log-httpd\") pod \"75257cd0-fb28-4880-aecb-467abbf8010d\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.914488 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-combined-ca-bundle\") pod \"75257cd0-fb28-4880-aecb-467abbf8010d\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.915080 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-sg-core-conf-yaml\") pod \"75257cd0-fb28-4880-aecb-467abbf8010d\" (UID: \"75257cd0-fb28-4880-aecb-467abbf8010d\") " Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.920123 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75257cd0-fb28-4880-aecb-467abbf8010d" (UID: "75257cd0-fb28-4880-aecb-467abbf8010d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.923496 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75257cd0-fb28-4880-aecb-467abbf8010d" (UID: "75257cd0-fb28-4880-aecb-467abbf8010d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.932197 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "75257cd0-fb28-4880-aecb-467abbf8010d" (UID: "75257cd0-fb28-4880-aecb-467abbf8010d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.935303 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-config-data" (OuterVolumeSpecName: "config-data") pod "75257cd0-fb28-4880-aecb-467abbf8010d" (UID: "75257cd0-fb28-4880-aecb-467abbf8010d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.935947 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75257cd0-fb28-4880-aecb-467abbf8010d-kube-api-access-bq876" (OuterVolumeSpecName: "kube-api-access-bq876") pod "75257cd0-fb28-4880-aecb-467abbf8010d" (UID: "75257cd0-fb28-4880-aecb-467abbf8010d"). InnerVolumeSpecName "kube-api-access-bq876". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.937594 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75257cd0-fb28-4880-aecb-467abbf8010d" (UID: "75257cd0-fb28-4880-aecb-467abbf8010d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.943499 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-scripts" (OuterVolumeSpecName: "scripts") pod "75257cd0-fb28-4880-aecb-467abbf8010d" (UID: "75257cd0-fb28-4880-aecb-467abbf8010d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:24 crc kubenswrapper[4784]: I0123 06:45:24.950224 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75257cd0-fb28-4880-aecb-467abbf8010d" (UID: "75257cd0-fb28-4880-aecb-467abbf8010d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.017841 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq876\" (UniqueName: \"kubernetes.io/projected/75257cd0-fb28-4880-aecb-467abbf8010d-kube-api-access-bq876\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.017890 4784 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.017905 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.017915 4784 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.017928 4784 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75257cd0-fb28-4880-aecb-467abbf8010d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.017937 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.017945 4784 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.017958 4784 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75257cd0-fb28-4880-aecb-467abbf8010d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.029093 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.029870 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.072720 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5hq4v"] Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.075205 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.081119 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.081464 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.110231 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5hq4v"] Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.223994 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-scripts\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.224092 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmm7l\" (UniqueName: \"kubernetes.io/projected/f775fdb3-12ca-4168-833d-2ae3a140ae7e-kube-api-access-fmm7l\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.224544 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-config-data\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.224678 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.274440 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f0e8e43-7110-45ca-8be2-59c0150b3ac4" path="/var/lib/kubelet/pods/2f0e8e43-7110-45ca-8be2-59c0150b3ac4/volumes" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.277842 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e7692b8-7b60-4294-b6e5-7e2145383b4e" path="/var/lib/kubelet/pods/3e7692b8-7b60-4294-b6e5-7e2145383b4e/volumes" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.328771 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.329099 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-scripts\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.329160 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmm7l\" (UniqueName: \"kubernetes.io/projected/f775fdb3-12ca-4168-833d-2ae3a140ae7e-kube-api-access-fmm7l\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.329234 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-config-data\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.338195 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-scripts\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.340840 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.349048 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-config-data\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.362209 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmm7l\" (UniqueName: \"kubernetes.io/projected/f775fdb3-12ca-4168-833d-2ae3a140ae7e-kube-api-access-fmm7l\") pod \"nova-cell1-cell-mapping-5hq4v\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.441823 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:25 crc kubenswrapper[4784]: 
I0123 06:45:25.485319 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.818885 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.819556 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff246545-b372-434e-b61e-51c674848d39","Type":"ContainerStarted","Data":"d49330c613f589d1ac3fc0077536983828612c780456403bae64eac21a2032d9"} Jan 23 06:45:25 crc kubenswrapper[4784]: I0123 06:45:25.974603 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.007767 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:26 crc kubenswrapper[4784]: W0123 06:45:26.051138 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf775fdb3_12ca_4168_833d_2ae3a140ae7e.slice/crio-cd640f7cd67938c52ef7ac902313bd0a91173a68cde4cd32638e8a382a135467 WatchSource:0}: Error finding container cd640f7cd67938c52ef7ac902313bd0a91173a68cde4cd32638e8a382a135467: Status 404 returned error can't find the container with id cd640f7cd67938c52ef7ac902313bd0a91173a68cde4cd32638e8a382a135467 Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.073004 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.145250 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.157222 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5hq4v"] Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.165128 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.165255 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.165552 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.182896 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.311992 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-scripts\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.312499 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/263b6093-4133-4159-b83a-32199b46fa5d-run-httpd\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.312550 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc 
kubenswrapper[4784]: I0123 06:45:26.312639 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9kdx\" (UniqueName: \"kubernetes.io/projected/263b6093-4133-4159-b83a-32199b46fa5d-kube-api-access-c9kdx\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.312670 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.312730 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-config-data\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.312752 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.313005 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/263b6093-4133-4159-b83a-32199b46fa5d-log-httpd\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.415357 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-scripts\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.415442 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/263b6093-4133-4159-b83a-32199b46fa5d-run-httpd\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.415504 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.415557 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9kdx\" (UniqueName: \"kubernetes.io/projected/263b6093-4133-4159-b83a-32199b46fa5d-kube-api-access-c9kdx\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.415588 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.415657 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-config-data\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" 
Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.415692 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.415742 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/263b6093-4133-4159-b83a-32199b46fa5d-log-httpd\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.416434 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/263b6093-4133-4159-b83a-32199b46fa5d-log-httpd\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.416580 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/263b6093-4133-4159-b83a-32199b46fa5d-run-httpd\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.422865 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-scripts\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.423450 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.423726 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.425936 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-config-data\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.436083 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/263b6093-4133-4159-b83a-32199b46fa5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.452329 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9kdx\" (UniqueName: \"kubernetes.io/projected/263b6093-4133-4159-b83a-32199b46fa5d-kube-api-access-c9kdx\") pod \"ceilometer-0\" (UID: \"263b6093-4133-4159-b83a-32199b46fa5d\") " pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.540573 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.853322 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff246545-b372-434e-b61e-51c674848d39","Type":"ContainerStarted","Data":"fe18bcb4f17dffbd75140497905e2963fcb12cfdb2026478ab66c76dce92c30a"} Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.854689 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff246545-b372-434e-b61e-51c674848d39","Type":"ContainerStarted","Data":"a2a51cc5674d0a9f19e2c903e0ea5cd6fc9fe7e6dda17c9d5311e9bbd1fc72a7"} Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.861120 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5hq4v" event={"ID":"f775fdb3-12ca-4168-833d-2ae3a140ae7e","Type":"ContainerStarted","Data":"04b65425da2f85022c86789a01498790868c6d97149dc8744c68abb012db3825"} Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.861209 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5hq4v" event={"ID":"f775fdb3-12ca-4168-833d-2ae3a140ae7e","Type":"ContainerStarted","Data":"cd640f7cd67938c52ef7ac902313bd0a91173a68cde4cd32638e8a382a135467"} Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.923738 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.923698908 podStartE2EDuration="2.923698908s" podCreationTimestamp="2026-01-23 06:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:45:26.905680084 +0000 UTC m=+1530.138188068" watchObservedRunningTime="2026-01-23 06:45:26.923698908 +0000 UTC m=+1530.156206892" Jan 23 06:45:26 crc kubenswrapper[4784]: I0123 06:45:26.937924 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-cell-mapping-5hq4v" podStartSLOduration=1.937894357 podStartE2EDuration="1.937894357s" podCreationTimestamp="2026-01-23 06:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:45:26.931564722 +0000 UTC m=+1530.164072706" watchObservedRunningTime="2026-01-23 06:45:26.937894357 +0000 UTC m=+1530.170402331" Jan 23 06:45:27 crc kubenswrapper[4784]: I0123 06:45:27.116191 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:45:27 crc kubenswrapper[4784]: I0123 06:45:27.270381 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75257cd0-fb28-4880-aecb-467abbf8010d" path="/var/lib/kubelet/pods/75257cd0-fb28-4880-aecb-467abbf8010d/volumes" Jan 23 06:45:27 crc kubenswrapper[4784]: I0123 06:45:27.271072 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:45:27 crc kubenswrapper[4784]: I0123 06:45:27.419649 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-khwgk"] Jan 23 06:45:27 crc kubenswrapper[4784]: I0123 06:45:27.420049 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" podUID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" containerName="dnsmasq-dns" containerID="cri-o://0638cf18446280008cc5bb8414a9b26ad74c1172183eaeb1bf6a52b9c0e85e65" gracePeriod=10 Jan 23 06:45:27 crc kubenswrapper[4784]: I0123 06:45:27.882748 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerStarted","Data":"0931b6793625a27f098cfa95ddc9a9e4ccc6716381869fe6e6dc97203605642e"} Jan 23 06:45:27 crc kubenswrapper[4784]: I0123 06:45:27.886553 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" containerID="0638cf18446280008cc5bb8414a9b26ad74c1172183eaeb1bf6a52b9c0e85e65" exitCode=0 Jan 23 06:45:27 crc kubenswrapper[4784]: I0123 06:45:27.886630 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" event={"ID":"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f","Type":"ContainerDied","Data":"0638cf18446280008cc5bb8414a9b26ad74c1172183eaeb1bf6a52b9c0e85e65"} Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.026477 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.114074 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-nb\") pod \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.114242 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-config\") pod \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.114318 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-swift-storage-0\") pod \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.114380 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q7nx\" (UniqueName: \"kubernetes.io/projected/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-kube-api-access-8q7nx\") pod 
\"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.114423 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-svc\") pod \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.114483 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-sb\") pod \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\" (UID: \"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f\") " Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.125272 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-kube-api-access-8q7nx" (OuterVolumeSpecName: "kube-api-access-8q7nx") pod "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" (UID: "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f"). InnerVolumeSpecName "kube-api-access-8q7nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.194149 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" (UID: "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.194876 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" (UID: "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.195058 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" (UID: "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.223463 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.225214 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.225318 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q7nx\" (UniqueName: \"kubernetes.io/projected/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-kube-api-access-8q7nx\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.225589 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 
06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.229357 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" (UID: "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.273736 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-config" (OuterVolumeSpecName: "config") pod "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" (UID: "b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.329651 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.330039 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.904947 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerStarted","Data":"b32fc5b188b9c182a1519e7a319ffd0e2844c19457eea65e0751cd078b0e6c10"} Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.911945 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" 
event={"ID":"b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f","Type":"ContainerDied","Data":"05a01f5847673092a7d281c01215f3e703682de54768e9321971097c527870d2"} Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.912034 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-khwgk" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.912044 4784 scope.go:117] "RemoveContainer" containerID="0638cf18446280008cc5bb8414a9b26ad74c1172183eaeb1bf6a52b9c0e85e65" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.952318 4784 scope.go:117] "RemoveContainer" containerID="a96176ead09888f7a36bbc745013577a5b1f91eb5881d5fb1421903eafd90a4c" Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.966314 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-khwgk"] Jan 23 06:45:28 crc kubenswrapper[4784]: I0123 06:45:28.979542 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-khwgk"] Jan 23 06:45:29 crc kubenswrapper[4784]: I0123 06:45:29.273882 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" path="/var/lib/kubelet/pods/b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f/volumes" Jan 23 06:45:29 crc kubenswrapper[4784]: I0123 06:45:29.928049 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerStarted","Data":"cd76c870fe40847de09e69d902853b6ac7f531bbf3e6b40751980f83fccc41ae"} Jan 23 06:45:30 crc kubenswrapper[4784]: I0123 06:45:30.946509 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerStarted","Data":"5469336ea0234e8040863d644e47b6448091b2a6b7714e9dfd0e44deeb8fc021"} Jan 23 06:45:32 crc kubenswrapper[4784]: I0123 06:45:32.977924 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerStarted","Data":"8d02df09ff247d55769fdbf95ce9e06119d5015bf9bd590c6331dfece2e87098"} Jan 23 06:45:32 crc kubenswrapper[4784]: I0123 06:45:32.978979 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:45:33 crc kubenswrapper[4784]: I0123 06:45:33.016223 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.270600756 podStartE2EDuration="8.01619601s" podCreationTimestamp="2026-01-23 06:45:25 +0000 UTC" firstStartedPulling="2026-01-23 06:45:27.129434323 +0000 UTC m=+1530.361942297" lastFinishedPulling="2026-01-23 06:45:31.875029577 +0000 UTC m=+1535.107537551" observedRunningTime="2026-01-23 06:45:33.015486603 +0000 UTC m=+1536.247994597" watchObservedRunningTime="2026-01-23 06:45:33.01619601 +0000 UTC m=+1536.248703984" Jan 23 06:45:33 crc kubenswrapper[4784]: I0123 06:45:33.990868 4784 generic.go:334] "Generic (PLEG): container finished" podID="f775fdb3-12ca-4168-833d-2ae3a140ae7e" containerID="04b65425da2f85022c86789a01498790868c6d97149dc8744c68abb012db3825" exitCode=0 Jan 23 06:45:33 crc kubenswrapper[4784]: I0123 06:45:33.991012 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5hq4v" event={"ID":"f775fdb3-12ca-4168-833d-2ae3a140ae7e","Type":"ContainerDied","Data":"04b65425da2f85022c86789a01498790868c6d97149dc8744c68abb012db3825"} Jan 23 06:45:34 crc kubenswrapper[4784]: I0123 06:45:34.019817 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 06:45:34 crc kubenswrapper[4784]: I0123 06:45:34.028327 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 06:45:34 crc kubenswrapper[4784]: I0123 06:45:34.032253 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Jan 23 06:45:34 crc kubenswrapper[4784]: I0123 06:45:34.723513 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:45:34 crc kubenswrapper[4784]: I0123 06:45:34.724110 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.016801 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.514706 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.562455 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-combined-ca-bundle\") pod \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.562624 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-scripts\") pod \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.562721 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmm7l\" (UniqueName: \"kubernetes.io/projected/f775fdb3-12ca-4168-833d-2ae3a140ae7e-kube-api-access-fmm7l\") pod \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.562933 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-config-data\") pod \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\" (UID: \"f775fdb3-12ca-4168-833d-2ae3a140ae7e\") " Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.572275 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-scripts" (OuterVolumeSpecName: "scripts") pod "f775fdb3-12ca-4168-833d-2ae3a140ae7e" (UID: "f775fdb3-12ca-4168-833d-2ae3a140ae7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.574070 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f775fdb3-12ca-4168-833d-2ae3a140ae7e-kube-api-access-fmm7l" (OuterVolumeSpecName: "kube-api-access-fmm7l") pod "f775fdb3-12ca-4168-833d-2ae3a140ae7e" (UID: "f775fdb3-12ca-4168-833d-2ae3a140ae7e"). InnerVolumeSpecName "kube-api-access-fmm7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.608641 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f775fdb3-12ca-4168-833d-2ae3a140ae7e" (UID: "f775fdb3-12ca-4168-833d-2ae3a140ae7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.628456 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-config-data" (OuterVolumeSpecName: "config-data") pod "f775fdb3-12ca-4168-833d-2ae3a140ae7e" (UID: "f775fdb3-12ca-4168-833d-2ae3a140ae7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.667658 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.667713 4784 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.667728 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmm7l\" (UniqueName: \"kubernetes.io/projected/f775fdb3-12ca-4168-833d-2ae3a140ae7e-kube-api-access-fmm7l\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.667743 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f775fdb3-12ca-4168-833d-2ae3a140ae7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.745200 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.219:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:35 crc kubenswrapper[4784]: I0123 06:45:35.745242 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.219:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 06:45:36.021786 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-5hq4v" event={"ID":"f775fdb3-12ca-4168-833d-2ae3a140ae7e","Type":"ContainerDied","Data":"cd640f7cd67938c52ef7ac902313bd0a91173a68cde4cd32638e8a382a135467"} Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 06:45:36.022294 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd640f7cd67938c52ef7ac902313bd0a91173a68cde4cd32638e8a382a135467" Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 06:45:36.021892 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5hq4v" Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 06:45:36.224006 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 06:45:36.224337 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-log" containerID="cri-o://fe18bcb4f17dffbd75140497905e2963fcb12cfdb2026478ab66c76dce92c30a" gracePeriod=30 Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 06:45:36.225927 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-api" containerID="cri-o://a2a51cc5674d0a9f19e2c903e0ea5cd6fc9fe7e6dda17c9d5311e9bbd1fc72a7" gracePeriod=30 Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 06:45:36.266786 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 06:45:36.267193 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="363e2891-7d58-44f8-9404-6f62b57a87c8" containerName="nova-scheduler-scheduler" containerID="cri-o://933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e" gracePeriod=30 Jan 23 06:45:36 crc kubenswrapper[4784]: I0123 
06:45:36.286118 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:37 crc kubenswrapper[4784]: I0123 06:45:37.037249 4784 generic.go:334] "Generic (PLEG): container finished" podID="ff246545-b372-434e-b61e-51c674848d39" containerID="fe18bcb4f17dffbd75140497905e2963fcb12cfdb2026478ab66c76dce92c30a" exitCode=143 Jan 23 06:45:37 crc kubenswrapper[4784]: I0123 06:45:37.037360 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff246545-b372-434e-b61e-51c674848d39","Type":"ContainerDied","Data":"fe18bcb4f17dffbd75140497905e2963fcb12cfdb2026478ab66c76dce92c30a"} Jan 23 06:45:38 crc kubenswrapper[4784]: I0123 06:45:38.048967 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-log" containerID="cri-o://8fc8cdc39d7210f398376f06a3569ff42b1b826325a0f5b52bed52dd13895968" gracePeriod=30 Jan 23 06:45:38 crc kubenswrapper[4784]: I0123 06:45:38.049075 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-metadata" containerID="cri-o://608de97c178807991db1e18a6baacbfc425d7344e764e0b3c94aaaf73716265a" gracePeriod=30 Jan 23 06:45:39 crc kubenswrapper[4784]: I0123 06:45:39.063776 4784 generic.go:334] "Generic (PLEG): container finished" podID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerID="8fc8cdc39d7210f398376f06a3569ff42b1b826325a0f5b52bed52dd13895968" exitCode=143 Jan 23 06:45:39 crc kubenswrapper[4784]: I0123 06:45:39.064034 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480d35f1-9e5d-4c9b-bdab-fd9531bf794a","Type":"ContainerDied","Data":"8fc8cdc39d7210f398376f06a3569ff42b1b826325a0f5b52bed52dd13895968"} Jan 23 06:45:40 crc kubenswrapper[4784]: E0123 06:45:40.706829 4784 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e is running failed: container process not found" containerID="933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 06:45:40 crc kubenswrapper[4784]: E0123 06:45:40.707348 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e is running failed: container process not found" containerID="933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 06:45:40 crc kubenswrapper[4784]: E0123 06:45:40.707995 4784 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e is running failed: container process not found" containerID="933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 06:45:40 crc kubenswrapper[4784]: E0123 06:45:40.708111 4784 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="363e2891-7d58-44f8-9404-6f62b57a87c8" containerName="nova-scheduler-scheduler" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.086217 4784 generic.go:334] "Generic (PLEG): container finished" podID="363e2891-7d58-44f8-9404-6f62b57a87c8" 
containerID="933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e" exitCode=0 Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.086329 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"363e2891-7d58-44f8-9404-6f62b57a87c8","Type":"ContainerDied","Data":"933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e"} Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.086837 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"363e2891-7d58-44f8-9404-6f62b57a87c8","Type":"ContainerDied","Data":"25c8bc56e12eec39dc553e606ce80a6c8656dc043ec23ed6c7f87bfd07ca099e"} Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.086855 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c8bc56e12eec39dc553e606ce80a6c8656dc043ec23ed6c7f87bfd07ca099e" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.111778 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.196179 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:37110->10.217.0.216:8775: read: connection reset by peer" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.196452 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:37098->10.217.0.216:8775: read: connection reset by peer" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.230965 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvxzd\" (UniqueName: \"kubernetes.io/projected/363e2891-7d58-44f8-9404-6f62b57a87c8-kube-api-access-wvxzd\") pod \"363e2891-7d58-44f8-9404-6f62b57a87c8\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.231069 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-config-data\") pod \"363e2891-7d58-44f8-9404-6f62b57a87c8\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.231201 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-combined-ca-bundle\") pod \"363e2891-7d58-44f8-9404-6f62b57a87c8\" (UID: \"363e2891-7d58-44f8-9404-6f62b57a87c8\") " Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.239740 4784 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363e2891-7d58-44f8-9404-6f62b57a87c8-kube-api-access-wvxzd" (OuterVolumeSpecName: "kube-api-access-wvxzd") pod "363e2891-7d58-44f8-9404-6f62b57a87c8" (UID: "363e2891-7d58-44f8-9404-6f62b57a87c8"). InnerVolumeSpecName "kube-api-access-wvxzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.272125 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "363e2891-7d58-44f8-9404-6f62b57a87c8" (UID: "363e2891-7d58-44f8-9404-6f62b57a87c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.279858 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-config-data" (OuterVolumeSpecName: "config-data") pod "363e2891-7d58-44f8-9404-6f62b57a87c8" (UID: "363e2891-7d58-44f8-9404-6f62b57a87c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.335332 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.335377 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvxzd\" (UniqueName: \"kubernetes.io/projected/363e2891-7d58-44f8-9404-6f62b57a87c8-kube-api-access-wvxzd\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:41 crc kubenswrapper[4784]: I0123 06:45:41.335392 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363e2891-7d58-44f8-9404-6f62b57a87c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.102659 4784 generic.go:334] "Generic (PLEG): container finished" podID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerID="608de97c178807991db1e18a6baacbfc425d7344e764e0b3c94aaaf73716265a" exitCode=0 Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.102743 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480d35f1-9e5d-4c9b-bdab-fd9531bf794a","Type":"ContainerDied","Data":"608de97c178807991db1e18a6baacbfc425d7344e764e0b3c94aaaf73716265a"} Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.106701 4784 generic.go:334] "Generic (PLEG): container finished" podID="ff246545-b372-434e-b61e-51c674848d39" containerID="a2a51cc5674d0a9f19e2c903e0ea5cd6fc9fe7e6dda17c9d5311e9bbd1fc72a7" exitCode=0 Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.106811 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff246545-b372-434e-b61e-51c674848d39","Type":"ContainerDied","Data":"a2a51cc5674d0a9f19e2c903e0ea5cd6fc9fe7e6dda17c9d5311e9bbd1fc72a7"} Jan 23 06:45:42 crc 
kubenswrapper[4784]: I0123 06:45:42.106841 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.169378 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.185179 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.214145 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:45:42 crc kubenswrapper[4784]: E0123 06:45:42.215088 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363e2891-7d58-44f8-9404-6f62b57a87c8" containerName="nova-scheduler-scheduler" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.215117 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="363e2891-7d58-44f8-9404-6f62b57a87c8" containerName="nova-scheduler-scheduler" Jan 23 06:45:42 crc kubenswrapper[4784]: E0123 06:45:42.215133 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" containerName="dnsmasq-dns" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.215155 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" containerName="dnsmasq-dns" Jan 23 06:45:42 crc kubenswrapper[4784]: E0123 06:45:42.215186 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f775fdb3-12ca-4168-833d-2ae3a140ae7e" containerName="nova-manage" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.215195 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f775fdb3-12ca-4168-833d-2ae3a140ae7e" containerName="nova-manage" Jan 23 06:45:42 crc kubenswrapper[4784]: E0123 06:45:42.215222 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" containerName="init" 
Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.215230 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" containerName="init" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.216063 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f775fdb3-12ca-4168-833d-2ae3a140ae7e" containerName="nova-manage" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.216099 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9dc0cd2-fdae-4614-a1a5-5c9cee9b449f" containerName="dnsmasq-dns" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.216120 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="363e2891-7d58-44f8-9404-6f62b57a87c8" containerName="nova-scheduler-scheduler" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.218689 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.224166 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.224904 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.258726 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-config-data\") pod \"nova-scheduler-0\" (UID: \"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.258823 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plwpj\" (UniqueName: \"kubernetes.io/projected/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-kube-api-access-plwpj\") pod \"nova-scheduler-0\" (UID: 
\"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.259213 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.318997 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.365131 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-config-data\") pod \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.365742 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-combined-ca-bundle\") pod \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.365921 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcgzf\" (UniqueName: \"kubernetes.io/projected/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-kube-api-access-fcgzf\") pod \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.366063 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-logs\") pod \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\" 
(UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.366125 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-nova-metadata-tls-certs\") pod \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\" (UID: \"480d35f1-9e5d-4c9b-bdab-fd9531bf794a\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.366667 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plwpj\" (UniqueName: \"kubernetes.io/projected/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-kube-api-access-plwpj\") pod \"nova-scheduler-0\" (UID: \"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.366980 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.367062 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-logs" (OuterVolumeSpecName: "logs") pod "480d35f1-9e5d-4c9b-bdab-fd9531bf794a" (UID: "480d35f1-9e5d-4c9b-bdab-fd9531bf794a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.382936 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-config-data\") pod \"nova-scheduler-0\" (UID: \"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.383432 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.387893 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-kube-api-access-fcgzf" (OuterVolumeSpecName: "kube-api-access-fcgzf") pod "480d35f1-9e5d-4c9b-bdab-fd9531bf794a" (UID: "480d35f1-9e5d-4c9b-bdab-fd9531bf794a"). InnerVolumeSpecName "kube-api-access-fcgzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.392529 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-config-data\") pod \"nova-scheduler-0\" (UID: \"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.396865 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.414230 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plwpj\" (UniqueName: \"kubernetes.io/projected/a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6-kube-api-access-plwpj\") pod \"nova-scheduler-0\" (UID: \"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6\") " pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.455088 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-config-data" (OuterVolumeSpecName: "config-data") pod "480d35f1-9e5d-4c9b-bdab-fd9531bf794a" (UID: "480d35f1-9e5d-4c9b-bdab-fd9531bf794a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.459078 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "480d35f1-9e5d-4c9b-bdab-fd9531bf794a" (UID: "480d35f1-9e5d-4c9b-bdab-fd9531bf794a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.490469 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcgzf\" (UniqueName: \"kubernetes.io/projected/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-kube-api-access-fcgzf\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.490527 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.490543 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.511541 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "480d35f1-9e5d-4c9b-bdab-fd9531bf794a" (UID: "480d35f1-9e5d-4c9b-bdab-fd9531bf794a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.592974 4784 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480d35f1-9e5d-4c9b-bdab-fd9531bf794a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.595290 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.644712 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.694681 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-config-data\") pod \"ff246545-b372-434e-b61e-51c674848d39\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.694765 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff246545-b372-434e-b61e-51c674848d39-logs\") pod \"ff246545-b372-434e-b61e-51c674848d39\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.694937 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk8d4\" (UniqueName: \"kubernetes.io/projected/ff246545-b372-434e-b61e-51c674848d39-kube-api-access-fk8d4\") pod \"ff246545-b372-434e-b61e-51c674848d39\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.695076 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-combined-ca-bundle\") pod \"ff246545-b372-434e-b61e-51c674848d39\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.695121 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-internal-tls-certs\") pod \"ff246545-b372-434e-b61e-51c674848d39\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.695188 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-public-tls-certs\") pod \"ff246545-b372-434e-b61e-51c674848d39\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.695550 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff246545-b372-434e-b61e-51c674848d39-logs" (OuterVolumeSpecName: "logs") pod "ff246545-b372-434e-b61e-51c674848d39" (UID: "ff246545-b372-434e-b61e-51c674848d39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.696023 4784 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff246545-b372-434e-b61e-51c674848d39-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.700263 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff246545-b372-434e-b61e-51c674848d39-kube-api-access-fk8d4" (OuterVolumeSpecName: "kube-api-access-fk8d4") pod "ff246545-b372-434e-b61e-51c674848d39" (UID: "ff246545-b372-434e-b61e-51c674848d39"). InnerVolumeSpecName "kube-api-access-fk8d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.728037 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff246545-b372-434e-b61e-51c674848d39" (UID: "ff246545-b372-434e-b61e-51c674848d39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.761995 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-config-data" (OuterVolumeSpecName: "config-data") pod "ff246545-b372-434e-b61e-51c674848d39" (UID: "ff246545-b372-434e-b61e-51c674848d39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.770900 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ff246545-b372-434e-b61e-51c674848d39" (UID: "ff246545-b372-434e-b61e-51c674848d39"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.797138 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff246545-b372-434e-b61e-51c674848d39" (UID: "ff246545-b372-434e-b61e-51c674848d39"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.798319 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-public-tls-certs\") pod \"ff246545-b372-434e-b61e-51c674848d39\" (UID: \"ff246545-b372-434e-b61e-51c674848d39\") " Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.799411 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.799437 4784 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.799479 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.799493 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk8d4\" (UniqueName: \"kubernetes.io/projected/ff246545-b372-434e-b61e-51c674848d39-kube-api-access-fk8d4\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:42 crc kubenswrapper[4784]: W0123 06:45:42.799598 4784 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ff246545-b372-434e-b61e-51c674848d39/volumes/kubernetes.io~secret/public-tls-certs Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.799619 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") 
pod "ff246545-b372-434e-b61e-51c674848d39" (UID: "ff246545-b372-434e-b61e-51c674848d39"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:45:42 crc kubenswrapper[4784]: I0123 06:45:42.902256 4784 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff246545-b372-434e-b61e-51c674848d39-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.125536 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480d35f1-9e5d-4c9b-bdab-fd9531bf794a","Type":"ContainerDied","Data":"1bf1e6b84c39e42cb4836cf55debb3edc31794da2034c7fe78f3ff4e7fe87b8f"} Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.125655 4784 scope.go:117] "RemoveContainer" containerID="608de97c178807991db1e18a6baacbfc425d7344e764e0b3c94aaaf73716265a" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.125561 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.135806 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff246545-b372-434e-b61e-51c674848d39","Type":"ContainerDied","Data":"d49330c613f589d1ac3fc0077536983828612c780456403bae64eac21a2032d9"} Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.135910 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.169829 4784 scope.go:117] "RemoveContainer" containerID="8fc8cdc39d7210f398376f06a3569ff42b1b826325a0f5b52bed52dd13895968" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.184554 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.207020 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.235374 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.271377 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363e2891-7d58-44f8-9404-6f62b57a87c8" path="/var/lib/kubelet/pods/363e2891-7d58-44f8-9404-6f62b57a87c8/volumes" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.272283 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" path="/var/lib/kubelet/pods/480d35f1-9e5d-4c9b-bdab-fd9531bf794a/volumes" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.273544 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.298583 4784 scope.go:117] "RemoveContainer" containerID="a2a51cc5674d0a9f19e2c903e0ea5cd6fc9fe7e6dda17c9d5311e9bbd1fc72a7" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.307901 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:43 crc kubenswrapper[4784]: E0123 06:45:43.308678 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-metadata" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.308703 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-metadata" Jan 23 06:45:43 crc kubenswrapper[4784]: E0123 06:45:43.308716 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-log" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.308723 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-log" Jan 23 06:45:43 crc kubenswrapper[4784]: E0123 06:45:43.308765 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-api" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.308773 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-api" Jan 23 06:45:43 crc kubenswrapper[4784]: E0123 06:45:43.308789 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-log" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.308795 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-log" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.309076 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-log" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.309108 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-log" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.309125 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff246545-b372-434e-b61e-51c674848d39" containerName="nova-api-api" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.309140 4784 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="480d35f1-9e5d-4c9b-bdab-fd9531bf794a" containerName="nova-metadata-metadata" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.310701 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.317392 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.317878 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 06:45:43 crc kubenswrapper[4784]: W0123 06:45:43.319475 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda834a7c0_2d61_4cae_a9aa_b9d79f2d92e6.slice/crio-4eee3c03b31034bb1296fbc876102e7f395f7487846e73367bb3cb1fc0c4e212 WatchSource:0}: Error finding container 4eee3c03b31034bb1296fbc876102e7f395f7487846e73367bb3cb1fc0c4e212: Status 404 returned error can't find the container with id 4eee3c03b31034bb1296fbc876102e7f395f7487846e73367bb3cb1fc0c4e212 Jan 23 06:45:43 crc kubenswrapper[4784]: E0123 06:45:43.327094 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff246545_b372_434e_b61e_51c674848d39.slice/crio-d49330c613f589d1ac3fc0077536983828612c780456403bae64eac21a2032d9\": RecentStats: unable to find data in memory cache]" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.342874 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.363601 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.387865 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 
06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.390072 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.393644 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.393801 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.394066 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.401345 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.421985 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssw5l\" (UniqueName: \"kubernetes.io/projected/a52c9612-2f18-438f-aacb-5f9ec3c24082-kube-api-access-ssw5l\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.422110 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.422151 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-config-data\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 
06:45:43.422643 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-config-data\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.422810 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.422847 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7b402f-9e10-4056-9911-be0cbb5fab92-logs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.422974 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff5bd\" (UniqueName: \"kubernetes.io/projected/fd7b402f-9e10-4056-9911-be0cbb5fab92-kube-api-access-ff5bd\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.423227 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.424138 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.424187 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.424260 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a52c9612-2f18-438f-aacb-5f9ec3c24082-logs\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.469539 4784 scope.go:117] "RemoveContainer" containerID="fe18bcb4f17dffbd75140497905e2963fcb12cfdb2026478ab66c76dce92c30a" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.527514 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.527631 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.527663 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.527710 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a52c9612-2f18-438f-aacb-5f9ec3c24082-logs\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.527792 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssw5l\" (UniqueName: \"kubernetes.io/projected/a52c9612-2f18-438f-aacb-5f9ec3c24082-kube-api-access-ssw5l\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.527870 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.527904 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-config-data\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.528004 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-config-data\") pod \"nova-api-0\" (UID: 
\"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.528049 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.528074 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7b402f-9e10-4056-9911-be0cbb5fab92-logs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.528116 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff5bd\" (UniqueName: \"kubernetes.io/projected/fd7b402f-9e10-4056-9911-be0cbb5fab92-kube-api-access-ff5bd\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.529257 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a52c9612-2f18-438f-aacb-5f9ec3c24082-logs\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.529300 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd7b402f-9e10-4056-9911-be0cbb5fab92-logs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.532853 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.533700 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.536840 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.537198 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-config-data\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.537841 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.538516 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd7b402f-9e10-4056-9911-be0cbb5fab92-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.546060 
4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52c9612-2f18-438f-aacb-5f9ec3c24082-config-data\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.556725 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff5bd\" (UniqueName: \"kubernetes.io/projected/fd7b402f-9e10-4056-9911-be0cbb5fab92-kube-api-access-ff5bd\") pod \"nova-api-0\" (UID: \"fd7b402f-9e10-4056-9911-be0cbb5fab92\") " pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.560441 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssw5l\" (UniqueName: \"kubernetes.io/projected/a52c9612-2f18-438f-aacb-5f9ec3c24082-kube-api-access-ssw5l\") pod \"nova-metadata-0\" (UID: \"a52c9612-2f18-438f-aacb-5f9ec3c24082\") " pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.676468 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7fzqm"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.680114 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.708640 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7fzqm"] Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.737206 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-utilities\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.738217 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-catalog-content\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.739302 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p86v\" (UniqueName: \"kubernetes.io/projected/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-kube-api-access-6p86v\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.753788 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.763074 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.842351 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-catalog-content\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.842492 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p86v\" (UniqueName: \"kubernetes.io/projected/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-kube-api-access-6p86v\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.842568 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-utilities\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.843449 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-utilities\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.843819 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-catalog-content\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " 
pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:43 crc kubenswrapper[4784]: I0123 06:45:43.873740 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p86v\" (UniqueName: \"kubernetes.io/projected/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-kube-api-access-6p86v\") pod \"certified-operators-7fzqm\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:44 crc kubenswrapper[4784]: I0123 06:45:44.012464 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:45:44 crc kubenswrapper[4784]: I0123 06:45:44.190055 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6","Type":"ContainerStarted","Data":"4eee3c03b31034bb1296fbc876102e7f395f7487846e73367bb3cb1fc0c4e212"} Jan 23 06:45:44 crc kubenswrapper[4784]: I0123 06:45:44.348961 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:45:44 crc kubenswrapper[4784]: I0123 06:45:44.459447 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:45:44 crc kubenswrapper[4784]: I0123 06:45:44.646641 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7fzqm"] Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.210069 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6","Type":"ContainerStarted","Data":"8b86fc69da695a5822486daffaf038c6fd60c609a44fadba1fd1f988c8e8d358"} Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.214435 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fzqm" 
event={"ID":"cd4440bf-fb03-421e-8156-49d3c1a7cc6c","Type":"ContainerStarted","Data":"7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e"} Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.214487 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fzqm" event={"ID":"cd4440bf-fb03-421e-8156-49d3c1a7cc6c","Type":"ContainerStarted","Data":"716bbdb3f1961bc90ce69dd4f5ca3dc67a156ae024a1c74b8196b229f3e0152a"} Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.223935 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a52c9612-2f18-438f-aacb-5f9ec3c24082","Type":"ContainerStarted","Data":"f80dcabc73d86cebaadb10309f3c00a0f40e3852465230529e26e67b3f97330b"} Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.224549 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a52c9612-2f18-438f-aacb-5f9ec3c24082","Type":"ContainerStarted","Data":"a045a14fd6a19589dae40713115f1e18e3cfde94d335fdb0d52822e54503be3f"} Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.227669 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd7b402f-9e10-4056-9911-be0cbb5fab92","Type":"ContainerStarted","Data":"205d45ea4a3bfd9ffa97ab212c8536996b35f8c9acc645a3bc0db72454a06bac"} Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.227733 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd7b402f-9e10-4056-9911-be0cbb5fab92","Type":"ContainerStarted","Data":"2019de06cd4012c403ee483dc0cdc5f8d3de992bf89f570370e54a3b3766e63a"} Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.233700 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.233684765 podStartE2EDuration="3.233684765s" podCreationTimestamp="2026-01-23 06:45:42 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:45:45.229248206 +0000 UTC m=+1548.461756180" watchObservedRunningTime="2026-01-23 06:45:45.233684765 +0000 UTC m=+1548.466192739" Jan 23 06:45:45 crc kubenswrapper[4784]: I0123 06:45:45.270046 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff246545-b372-434e-b61e-51c674848d39" path="/var/lib/kubelet/pods/ff246545-b372-434e-b61e-51c674848d39/volumes" Jan 23 06:45:46 crc kubenswrapper[4784]: I0123 06:45:46.246970 4784 generic.go:334] "Generic (PLEG): container finished" podID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerID="7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e" exitCode=0 Jan 23 06:45:46 crc kubenswrapper[4784]: I0123 06:45:46.247070 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fzqm" event={"ID":"cd4440bf-fb03-421e-8156-49d3c1a7cc6c","Type":"ContainerDied","Data":"7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e"} Jan 23 06:45:47 crc kubenswrapper[4784]: I0123 06:45:47.646623 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 06:45:52 crc kubenswrapper[4784]: I0123 06:45:52.349929 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a52c9612-2f18-438f-aacb-5f9ec3c24082","Type":"ContainerStarted","Data":"3304b8976ae1f01956b31a2a4aeda4b9245ec9d88f5027511b2026043a929f8f"} Jan 23 06:45:52 crc kubenswrapper[4784]: I0123 06:45:52.354466 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd7b402f-9e10-4056-9911-be0cbb5fab92","Type":"ContainerStarted","Data":"d894532c9ebdf32d738ec814985d4e537eba7b22915cad5bb0bd50b8cad63f46"} Jan 23 06:45:52 crc kubenswrapper[4784]: I0123 06:45:52.387324 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=9.38729064 podStartE2EDuration="9.38729064s" podCreationTimestamp="2026-01-23 06:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:45:52.372468135 +0000 UTC m=+1555.604976149" watchObservedRunningTime="2026-01-23 06:45:52.38729064 +0000 UTC m=+1555.619798624" Jan 23 06:45:52 crc kubenswrapper[4784]: I0123 06:45:52.413185 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=9.413145646 podStartE2EDuration="9.413145646s" podCreationTimestamp="2026-01-23 06:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:45:52.400473454 +0000 UTC m=+1555.632981448" watchObservedRunningTime="2026-01-23 06:45:52.413145646 +0000 UTC m=+1555.645653620" Jan 23 06:45:52 crc kubenswrapper[4784]: I0123 06:45:52.646452 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 06:45:52 crc kubenswrapper[4784]: I0123 06:45:52.696064 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.369355 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fzqm" event={"ID":"cd4440bf-fb03-421e-8156-49d3c1a7cc6c","Type":"ContainerStarted","Data":"42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066"} Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.407975 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.603367 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.603449 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.754658 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.754735 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.754797 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.754813 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.765141 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:45:53 crc kubenswrapper[4784]: I0123 06:45:53.765211 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:45:54 crc kubenswrapper[4784]: I0123 06:45:54.770166 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a52c9612-2f18-438f-aacb-5f9ec3c24082" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:54 crc kubenswrapper[4784]: I0123 06:45:54.770286 4784 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="a52c9612-2f18-438f-aacb-5f9ec3c24082" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:54 crc kubenswrapper[4784]: I0123 06:45:54.785052 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd7b402f-9e10-4056-9911-be0cbb5fab92" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:54 crc kubenswrapper[4784]: I0123 06:45:54.785075 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd7b402f-9e10-4056-9911-be0cbb5fab92" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:45:56 crc kubenswrapper[4784]: I0123 06:45:56.895274 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 06:45:58 crc kubenswrapper[4784]: I0123 06:45:58.435863 4784 generic.go:334] "Generic (PLEG): container finished" podID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerID="42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066" exitCode=0 Jan 23 06:45:58 crc kubenswrapper[4784]: I0123 06:45:58.435898 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fzqm" event={"ID":"cd4440bf-fb03-421e-8156-49d3c1a7cc6c","Type":"ContainerDied","Data":"42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066"} Jan 23 06:46:00 crc kubenswrapper[4784]: I0123 06:46:00.463322 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fzqm" 
event={"ID":"cd4440bf-fb03-421e-8156-49d3c1a7cc6c","Type":"ContainerStarted","Data":"121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23"} Jan 23 06:46:00 crc kubenswrapper[4784]: I0123 06:46:00.484705 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7fzqm" podStartSLOduration=4.654717955 podStartE2EDuration="17.484666627s" podCreationTimestamp="2026-01-23 06:45:43 +0000 UTC" firstStartedPulling="2026-01-23 06:45:46.249872221 +0000 UTC m=+1549.482380195" lastFinishedPulling="2026-01-23 06:45:59.079820893 +0000 UTC m=+1562.312328867" observedRunningTime="2026-01-23 06:46:00.48150855 +0000 UTC m=+1563.714016524" watchObservedRunningTime="2026-01-23 06:46:00.484666627 +0000 UTC m=+1563.717174601" Jan 23 06:46:03 crc kubenswrapper[4784]: I0123 06:46:03.761314 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 06:46:03 crc kubenswrapper[4784]: I0123 06:46:03.763079 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 06:46:03 crc kubenswrapper[4784]: I0123 06:46:03.776192 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 06:46:03 crc kubenswrapper[4784]: I0123 06:46:03.793866 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 06:46:03 crc kubenswrapper[4784]: I0123 06:46:03.796137 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 06:46:03 crc kubenswrapper[4784]: I0123 06:46:03.803597 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 06:46:03 crc kubenswrapper[4784]: I0123 06:46:03.807489 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 06:46:04 crc kubenswrapper[4784]: I0123 
06:46:04.014025 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:46:04 crc kubenswrapper[4784]: I0123 06:46:04.014550 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:46:04 crc kubenswrapper[4784]: I0123 06:46:04.070525 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:46:04 crc kubenswrapper[4784]: I0123 06:46:04.515692 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 06:46:04 crc kubenswrapper[4784]: I0123 06:46:04.523685 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 06:46:04 crc kubenswrapper[4784]: I0123 06:46:04.523821 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 06:46:04 crc kubenswrapper[4784]: I0123 06:46:04.621316 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:46:04 crc kubenswrapper[4784]: I0123 06:46:04.708354 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7fzqm"] Jan 23 06:46:06 crc kubenswrapper[4784]: I0123 06:46:06.535531 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7fzqm" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerName="registry-server" containerID="cri-o://121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23" gracePeriod=2 Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.124162 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.248350 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-catalog-content\") pod \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.248432 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p86v\" (UniqueName: \"kubernetes.io/projected/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-kube-api-access-6p86v\") pod \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.248548 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-utilities\") pod \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\" (UID: \"cd4440bf-fb03-421e-8156-49d3c1a7cc6c\") " Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.249845 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-utilities" (OuterVolumeSpecName: "utilities") pod "cd4440bf-fb03-421e-8156-49d3c1a7cc6c" (UID: "cd4440bf-fb03-421e-8156-49d3c1a7cc6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.265131 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-kube-api-access-6p86v" (OuterVolumeSpecName: "kube-api-access-6p86v") pod "cd4440bf-fb03-421e-8156-49d3c1a7cc6c" (UID: "cd4440bf-fb03-421e-8156-49d3c1a7cc6c"). InnerVolumeSpecName "kube-api-access-6p86v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.325004 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd4440bf-fb03-421e-8156-49d3c1a7cc6c" (UID: "cd4440bf-fb03-421e-8156-49d3c1a7cc6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.352417 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.352458 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.352495 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p86v\" (UniqueName: \"kubernetes.io/projected/cd4440bf-fb03-421e-8156-49d3c1a7cc6c-kube-api-access-6p86v\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.552064 4784 generic.go:334] "Generic (PLEG): container finished" podID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerID="121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23" exitCode=0 Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.552130 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7fzqm" event={"ID":"cd4440bf-fb03-421e-8156-49d3c1a7cc6c","Type":"ContainerDied","Data":"121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23"} Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.552174 4784 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-7fzqm" event={"ID":"cd4440bf-fb03-421e-8156-49d3c1a7cc6c","Type":"ContainerDied","Data":"716bbdb3f1961bc90ce69dd4f5ca3dc67a156ae024a1c74b8196b229f3e0152a"} Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.552199 4784 scope.go:117] "RemoveContainer" containerID="121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.552401 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7fzqm" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.599281 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7fzqm"] Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.603331 4784 scope.go:117] "RemoveContainer" containerID="42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.612729 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7fzqm"] Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.633615 4784 scope.go:117] "RemoveContainer" containerID="7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.686298 4784 scope.go:117] "RemoveContainer" containerID="121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23" Jan 23 06:46:07 crc kubenswrapper[4784]: E0123 06:46:07.686994 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23\": container with ID starting with 121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23 not found: ID does not exist" containerID="121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 
06:46:07.687033 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23"} err="failed to get container status \"121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23\": rpc error: code = NotFound desc = could not find container \"121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23\": container with ID starting with 121e160cd4810459c992ebc506b7448fa5b43a99c32a21640093d89a85d03b23 not found: ID does not exist" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.687372 4784 scope.go:117] "RemoveContainer" containerID="42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066" Jan 23 06:46:07 crc kubenswrapper[4784]: E0123 06:46:07.687805 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066\": container with ID starting with 42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066 not found: ID does not exist" containerID="42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.687834 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066"} err="failed to get container status \"42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066\": rpc error: code = NotFound desc = could not find container \"42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066\": container with ID starting with 42b840a9c0794448ce4e550ef3a6d14740d017cc59ce9e5a29164e2778208066 not found: ID does not exist" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.687858 4784 scope.go:117] "RemoveContainer" containerID="7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e" Jan 23 06:46:07 crc 
kubenswrapper[4784]: E0123 06:46:07.688228 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e\": container with ID starting with 7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e not found: ID does not exist" containerID="7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e" Jan 23 06:46:07 crc kubenswrapper[4784]: I0123 06:46:07.688257 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e"} err="failed to get container status \"7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e\": rpc error: code = NotFound desc = could not find container \"7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e\": container with ID starting with 7abda1df6ccedd41bdb0e741aeba758b10332b5f67d80b742ee557813b73665e not found: ID does not exist" Jan 23 06:46:09 crc kubenswrapper[4784]: I0123 06:46:09.271524 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" path="/var/lib/kubelet/pods/cd4440bf-fb03-421e-8156-49d3c1a7cc6c/volumes" Jan 23 06:46:14 crc kubenswrapper[4784]: I0123 06:46:14.829303 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:46:16 crc kubenswrapper[4784]: I0123 06:46:16.383951 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:46:20 crc kubenswrapper[4784]: I0123 06:46:20.308057 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerName="rabbitmq" containerID="cri-o://fbbccf065bf4ffbd21909156a14260a505558fbf2525c2e87f43df99e4ee0a5d" gracePeriod=604795 Jan 23 06:46:21 crc kubenswrapper[4784]: I0123 
06:46:21.591843 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" containerName="rabbitmq" containerID="cri-o://bab1e179a3cb60088fb59145c12918de242c44f03ae18ef36400199e95e6c870" gracePeriod=604795 Jan 23 06:46:23 crc kubenswrapper[4784]: I0123 06:46:23.603413 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:46:23 crc kubenswrapper[4784]: I0123 06:46:23.603961 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:46:28 crc kubenswrapper[4784]: I0123 06:46:28.334502 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Jan 23 06:46:28 crc kubenswrapper[4784]: I0123 06:46:28.794502 4784 generic.go:334] "Generic (PLEG): container finished" podID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerID="fbbccf065bf4ffbd21909156a14260a505558fbf2525c2e87f43df99e4ee0a5d" exitCode=0 Jan 23 06:46:28 crc kubenswrapper[4784]: I0123 06:46:28.794584 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e79eab6-cf02-4c69-99bd-2f3512c809f3","Type":"ContainerDied","Data":"fbbccf065bf4ffbd21909156a14260a505558fbf2525c2e87f43df99e4ee0a5d"} Jan 23 06:46:28 crc kubenswrapper[4784]: I0123 06:46:28.845973 4784 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.489947 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.603833 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-confd\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.604561 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkqgh\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-kube-api-access-hkqgh\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.604605 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-tls\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.604687 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-plugins\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.604813 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e79eab6-cf02-4c69-99bd-2f3512c809f3-erlang-cookie-secret\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.604863 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e79eab6-cf02-4c69-99bd-2f3512c809f3-pod-info\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.604914 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-plugins-conf\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.605099 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-erlang-cookie\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.605136 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-config-data\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.605186 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-server-conf\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 
06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.605211 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\" (UID: \"9e79eab6-cf02-4c69-99bd-2f3512c809f3\") " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.605498 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.606227 4784 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.606389 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.608544 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.612977 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.613059 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-kube-api-access-hkqgh" (OuterVolumeSpecName: "kube-api-access-hkqgh") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "kube-api-access-hkqgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.622392 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e79eab6-cf02-4c69-99bd-2f3512c809f3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.622530 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9e79eab6-cf02-4c69-99bd-2f3512c809f3-pod-info" (OuterVolumeSpecName: "pod-info") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.628772 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.710257 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkqgh\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-kube-api-access-hkqgh\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.710323 4784 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.710354 4784 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e79eab6-cf02-4c69-99bd-2f3512c809f3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.710364 4784 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e79eab6-cf02-4c69-99bd-2f3512c809f3-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.710373 4784 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.710382 4784 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.710418 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.712571 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-config-data" (OuterVolumeSpecName: "config-data") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.747034 4784 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.756992 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-server-conf" (OuterVolumeSpecName: "server-conf") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.812614 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.812642 4784 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e79eab6-cf02-4c69-99bd-2f3512c809f3-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.812667 4784 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.815109 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e79eab6-cf02-4c69-99bd-2f3512c809f3","Type":"ContainerDied","Data":"bde2b7726d6cec3065359ae895f20b7d9c28facc9ecaa12e10a1d2f8510e3391"} Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.815211 4784 scope.go:117] "RemoveContainer" containerID="fbbccf065bf4ffbd21909156a14260a505558fbf2525c2e87f43df99e4ee0a5d" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.815455 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.824300 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9e79eab6-cf02-4c69-99bd-2f3512c809f3" (UID: "9e79eab6-cf02-4c69-99bd-2f3512c809f3"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.842860 4784 scope.go:117] "RemoveContainer" containerID="108ed665071583075faa37237a76c5edf56e95c94290ca4776fc25ebc9dafb9e" Jan 23 06:46:29 crc kubenswrapper[4784]: I0123 06:46:29.914930 4784 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e79eab6-cf02-4c69-99bd-2f3512c809f3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.166981 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.243516 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.278062 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:46:30 crc kubenswrapper[4784]: E0123 06:46:30.278816 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerName="registry-server" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.278854 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerName="registry-server" Jan 23 06:46:30 crc kubenswrapper[4784]: E0123 06:46:30.278875 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerName="rabbitmq" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.278886 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerName="rabbitmq" Jan 23 06:46:30 crc kubenswrapper[4784]: E0123 06:46:30.278914 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerName="extract-content" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 
06:46:30.278943 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerName="extract-content" Jan 23 06:46:30 crc kubenswrapper[4784]: E0123 06:46:30.278957 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerName="extract-utilities" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.278965 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerName="extract-utilities" Jan 23 06:46:30 crc kubenswrapper[4784]: E0123 06:46:30.279006 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerName="setup-container" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.279017 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerName="setup-container" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.279324 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" containerName="rabbitmq" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.279356 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4440bf-fb03-421e-8156-49d3c1a7cc6c" containerName="registry-server" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.280839 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.285461 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.285803 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wnnn7" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.288961 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.289287 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.289456 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.289613 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.290057 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.295610 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.433358 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f65404b7-5dd6-409f-87c1-633679f2d5cb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.433850 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.433972 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f65404b7-5dd6-409f-87c1-633679f2d5cb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.434128 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-config-data\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.434310 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.434373 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.434424 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.434671 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rq8q\" (UniqueName: \"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-kube-api-access-4rq8q\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.434736 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.434923 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.435005 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536650 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536720 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536796 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f65404b7-5dd6-409f-87c1-633679f2d5cb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536822 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536865 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f65404b7-5dd6-409f-87c1-633679f2d5cb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536892 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-config-data\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536951 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536973 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.536997 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.537566 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.538006 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.538145 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.538169 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-config-data\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.538411 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rq8q\" (UniqueName: \"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-kube-api-access-4rq8q\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.538465 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.539515 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.541446 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f65404b7-5dd6-409f-87c1-633679f2d5cb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.544396 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.546940 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.547653 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f65404b7-5dd6-409f-87c1-633679f2d5cb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.564633 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f65404b7-5dd6-409f-87c1-633679f2d5cb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.564834 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rq8q\" (UniqueName: \"kubernetes.io/projected/f65404b7-5dd6-409f-87c1-633679f2d5cb-kube-api-access-4rq8q\") pod \"rabbitmq-server-0\" (UID: \"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.589614 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: 
\"f65404b7-5dd6-409f-87c1-633679f2d5cb\") " pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.620291 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.842281 4784 generic.go:334] "Generic (PLEG): container finished" podID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" containerID="bab1e179a3cb60088fb59145c12918de242c44f03ae18ef36400199e95e6c870" exitCode=0 Jan 23 06:46:30 crc kubenswrapper[4784]: I0123 06:46:30.842380 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e37da8a-e964-4f8b-aacc-2937130e2e7b","Type":"ContainerDied","Data":"bab1e179a3cb60088fb59145c12918de242c44f03ae18ef36400199e95e6c870"} Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.217383 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.303076 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e79eab6-cf02-4c69-99bd-2f3512c809f3" path="/var/lib/kubelet/pods/9e79eab6-cf02-4c69-99bd-2f3512c809f3/volumes" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.626526 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.837596 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjsld\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-kube-api-access-jjsld\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.837709 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-plugins\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.837798 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.838053 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e37da8a-e964-4f8b-aacc-2937130e2e7b-pod-info\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.838110 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e37da8a-e964-4f8b-aacc-2937130e2e7b-erlang-cookie-secret\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.838195 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-server-conf\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.838262 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-plugins-conf\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.838305 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-tls\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.838351 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-erlang-cookie\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.838470 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-config-data\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.838499 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-confd\") pod \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\" (UID: \"9e37da8a-e964-4f8b-aacc-2937130e2e7b\") " Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 
06:46:31.841815 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.845626 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.848134 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.850377 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-kube-api-access-jjsld" (OuterVolumeSpecName: "kube-api-access-jjsld") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "kube-api-access-jjsld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.862047 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.865518 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.866544 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9e37da8a-e964-4f8b-aacc-2937130e2e7b-pod-info" (OuterVolumeSpecName: "pod-info") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.866618 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e37da8a-e964-4f8b-aacc-2937130e2e7b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.872996 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f65404b7-5dd6-409f-87c1-633679f2d5cb","Type":"ContainerStarted","Data":"683b6171c315fa09c9f5b17e417e36a3ff934d9e6c1ed60555246e71aff4cad7"} Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.875620 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9e37da8a-e964-4f8b-aacc-2937130e2e7b","Type":"ContainerDied","Data":"1ee8cfee12fbbce40e85fbc58a30a10ff8b2da4298a5135b625b9a0d82b12e56"} Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.875692 4784 scope.go:117] "RemoveContainer" containerID="bab1e179a3cb60088fb59145c12918de242c44f03ae18ef36400199e95e6c870" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.876124 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.915735 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-config-data" (OuterVolumeSpecName: "config-data") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.952796 4784 scope.go:117] "RemoveContainer" containerID="6c523486f92879d29f8c12e1686060624335e68261e81600c144abb26218a886" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.962049 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-server-conf" (OuterVolumeSpecName: "server-conf") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). 
InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965332 4784 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e37da8a-e964-4f8b-aacc-2937130e2e7b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965373 4784 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e37da8a-e964-4f8b-aacc-2937130e2e7b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965387 4784 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965396 4784 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965409 4784 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965423 4784 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965432 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e37da8a-e964-4f8b-aacc-2937130e2e7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc 
kubenswrapper[4784]: I0123 06:46:31.965440 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjsld\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-kube-api-access-jjsld\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965448 4784 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:31 crc kubenswrapper[4784]: I0123 06:46:31.965488 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.002070 4784 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.046796 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9e37da8a-e964-4f8b-aacc-2937130e2e7b" (UID: "9e37da8a-e964-4f8b-aacc-2937130e2e7b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.067956 4784 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e37da8a-e964-4f8b-aacc-2937130e2e7b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.068002 4784 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.218209 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.228819 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.246088 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:46:32 crc kubenswrapper[4784]: E0123 06:46:32.246651 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" containerName="setup-container" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.246669 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" containerName="setup-container" Jan 23 06:46:32 crc kubenswrapper[4784]: E0123 06:46:32.246678 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" containerName="rabbitmq" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.246685 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" containerName="rabbitmq" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.246920 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" 
containerName="rabbitmq" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.248268 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.251216 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.251555 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fpvh8" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.251821 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.252027 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.252348 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.254448 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.254563 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.267137 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.374963 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807272ae-7f38-45f1-acd2-984a1a1840d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 
06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375042 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375095 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qg5q\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-kube-api-access-9qg5q\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375135 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375168 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807272ae-7f38-45f1-acd2-984a1a1840d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375241 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc 
kubenswrapper[4784]: I0123 06:46:32.375300 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375348 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375449 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375575 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.375647 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc 
kubenswrapper[4784]: I0123 06:46:32.478110 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478232 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478280 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478356 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478453 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478508 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478551 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807272ae-7f38-45f1-acd2-984a1a1840d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478594 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478640 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qg5q\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-kube-api-access-9qg5q\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478680 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.478709 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/807272ae-7f38-45f1-acd2-984a1a1840d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.479669 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.480347 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.480432 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.480642 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.481376 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.483302 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/807272ae-7f38-45f1-acd2-984a1a1840d8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.571114 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.571784 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/807272ae-7f38-45f1-acd2-984a1a1840d8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.571910 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.573274 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qg5q\" (UniqueName: \"kubernetes.io/projected/807272ae-7f38-45f1-acd2-984a1a1840d8-kube-api-access-9qg5q\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 
06:46:32.573351 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/807272ae-7f38-45f1-acd2-984a1a1840d8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.621594 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"807272ae-7f38-45f1-acd2-984a1a1840d8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:32 crc kubenswrapper[4784]: I0123 06:46:32.877320 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:46:33 crc kubenswrapper[4784]: I0123 06:46:33.271735 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e37da8a-e964-4f8b-aacc-2937130e2e7b" path="/var/lib/kubelet/pods/9e37da8a-e964-4f8b-aacc-2937130e2e7b/volumes" Jan 23 06:46:33 crc kubenswrapper[4784]: I0123 06:46:33.377257 4784 scope.go:117] "RemoveContainer" containerID="afc82229e0f8c8f306de6a444605332a7e502c7f53ba0c937c7bcd09c3ed8c63" Jan 23 06:46:33 crc kubenswrapper[4784]: I0123 06:46:33.418214 4784 scope.go:117] "RemoveContainer" containerID="bc48dc4b3dd963d5b237d173e341920e960ec9a8cec18c2764eebcb89441ebf8" Jan 23 06:46:33 crc kubenswrapper[4784]: I0123 06:46:33.438695 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:46:33 crc kubenswrapper[4784]: W0123 06:46:33.605267 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod807272ae_7f38_45f1_acd2_984a1a1840d8.slice/crio-e390f60472891ec2caefcf5e103748baea08a0c6feaa051654c62a40956d6faa WatchSource:0}: Error finding container 
e390f60472891ec2caefcf5e103748baea08a0c6feaa051654c62a40956d6faa: Status 404 returned error can't find the container with id e390f60472891ec2caefcf5e103748baea08a0c6feaa051654c62a40956d6faa Jan 23 06:46:33 crc kubenswrapper[4784]: I0123 06:46:33.948878 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807272ae-7f38-45f1-acd2-984a1a1840d8","Type":"ContainerStarted","Data":"e390f60472891ec2caefcf5e103748baea08a0c6feaa051654c62a40956d6faa"} Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.540651 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-6h8qs"] Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.543868 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.549926 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.558619 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-6h8qs"] Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.645303 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.645380 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wjgw\" (UniqueName: \"kubernetes.io/projected/c15cbdc1-7ce9-4f95-8125-5d021cac3703-kube-api-access-5wjgw\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 
06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.645417 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.645437 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.645576 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-config\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.645611 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.645801 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.748116 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.748172 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.748280 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-config\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.748306 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.748335 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 
23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.748395 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.748444 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wjgw\" (UniqueName: \"kubernetes.io/projected/c15cbdc1-7ce9-4f95-8125-5d021cac3703-kube-api-access-5wjgw\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.749684 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.749707 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.749777 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.749798 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.750035 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-config\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.750397 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.770528 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wjgw\" (UniqueName: \"kubernetes.io/projected/c15cbdc1-7ce9-4f95-8125-5d021cac3703-kube-api-access-5wjgw\") pod \"dnsmasq-dns-79bd4cc8c9-6h8qs\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.870507 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:34 crc kubenswrapper[4784]: I0123 06:46:34.987947 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f65404b7-5dd6-409f-87c1-633679f2d5cb","Type":"ContainerStarted","Data":"21c279abd7bced8474d0caea4791c94895a80df1d90f3168e0620b0b5038e55f"} Jan 23 06:46:35 crc kubenswrapper[4784]: I0123 06:46:35.401294 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-6h8qs"] Jan 23 06:46:36 crc kubenswrapper[4784]: I0123 06:46:36.001241 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" event={"ID":"c15cbdc1-7ce9-4f95-8125-5d021cac3703","Type":"ContainerStarted","Data":"0d678ace8b49a2f5c1a1d38b5c7abfa20c38bf87785cd4dd2e06076fa3e809e5"} Jan 23 06:46:37 crc kubenswrapper[4784]: I0123 06:46:37.015650 4784 generic.go:334] "Generic (PLEG): container finished" podID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" containerID="6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c" exitCode=0 Jan 23 06:46:37 crc kubenswrapper[4784]: I0123 06:46:37.015856 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" event={"ID":"c15cbdc1-7ce9-4f95-8125-5d021cac3703","Type":"ContainerDied","Data":"6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c"} Jan 23 06:46:37 crc kubenswrapper[4784]: I0123 06:46:37.031135 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807272ae-7f38-45f1-acd2-984a1a1840d8","Type":"ContainerStarted","Data":"8a1ebed1b53b172c5aa55dc6245acdbd984b9c068024d12ccfb4cdd54b9e68fd"} Jan 23 06:46:38 crc kubenswrapper[4784]: I0123 06:46:38.046156 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" 
event={"ID":"c15cbdc1-7ce9-4f95-8125-5d021cac3703","Type":"ContainerStarted","Data":"65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b"} Jan 23 06:46:39 crc kubenswrapper[4784]: I0123 06:46:39.074502 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:44 crc kubenswrapper[4784]: I0123 06:46:44.871975 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:44 crc kubenswrapper[4784]: I0123 06:46:44.906287 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" podStartSLOduration=10.906242069 podStartE2EDuration="10.906242069s" podCreationTimestamp="2026-01-23 06:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:46:38.079035829 +0000 UTC m=+1601.311543803" watchObservedRunningTime="2026-01-23 06:46:44.906242069 +0000 UTC m=+1608.138750083" Jan 23 06:46:44 crc kubenswrapper[4784]: I0123 06:46:44.938176 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-q5j59"] Jan 23 06:46:44 crc kubenswrapper[4784]: I0123 06:46:44.938512 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" podUID="e6c0eaf9-bfa3-491c-a219-6450089b378e" containerName="dnsmasq-dns" containerID="cri-o://3f8542c3412e5fae107d8041e6b6112a777575f90fd87e7c5c3b5b9f574993a6" gracePeriod=10 Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.166371 4784 generic.go:334] "Generic (PLEG): container finished" podID="e6c0eaf9-bfa3-491c-a219-6450089b378e" containerID="3f8542c3412e5fae107d8041e6b6112a777575f90fd87e7c5c3b5b9f574993a6" exitCode=0 Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.166805 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" event={"ID":"e6c0eaf9-bfa3-491c-a219-6450089b378e","Type":"ContainerDied","Data":"3f8542c3412e5fae107d8041e6b6112a777575f90fd87e7c5c3b5b9f574993a6"} Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.286926 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-s7ths"] Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.313574 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.326157 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-s7ths"] Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.418108 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.418208 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.418270 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcrnf\" (UniqueName: \"kubernetes.io/projected/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-kube-api-access-qcrnf\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 
06:46:45.418406 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.418447 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.418474 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.418523 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-config\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.523516 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: 
I0123 06:46:45.523603 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.523644 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcrnf\" (UniqueName: \"kubernetes.io/projected/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-kube-api-access-qcrnf\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.523710 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.523731 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.523774 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.523805 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-config\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.524808 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-config\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.525414 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.526409 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.529386 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.534569 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.535061 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.568450 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcrnf\" (UniqueName: \"kubernetes.io/projected/1b313cca-4d7d-435b-9c85-8ca53f4b4bf1-kube-api-access-qcrnf\") pod \"dnsmasq-dns-6cd9bffc9-s7ths\" (UID: \"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1\") " pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.672113 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.760665 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bgqnc"] Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.772225 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.819201 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgqnc"] Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.966105 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-utilities\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.972365 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6mg\" (UniqueName: \"kubernetes.io/projected/93758036-c735-43ff-9ae8-1dd2f95c7e71-kube-api-access-8n6mg\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:45 crc kubenswrapper[4784]: I0123 06:46:45.972455 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-catalog-content\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.011602 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.075529 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4qgk\" (UniqueName: \"kubernetes.io/projected/e6c0eaf9-bfa3-491c-a219-6450089b378e-kube-api-access-k4qgk\") pod \"e6c0eaf9-bfa3-491c-a219-6450089b378e\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.075716 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-sb\") pod \"e6c0eaf9-bfa3-491c-a219-6450089b378e\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.075906 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-svc\") pod \"e6c0eaf9-bfa3-491c-a219-6450089b378e\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.076067 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-config\") pod \"e6c0eaf9-bfa3-491c-a219-6450089b378e\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.076109 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-nb\") pod \"e6c0eaf9-bfa3-491c-a219-6450089b378e\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.076178 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-swift-storage-0\") pod \"e6c0eaf9-bfa3-491c-a219-6450089b378e\" (UID: \"e6c0eaf9-bfa3-491c-a219-6450089b378e\") " Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.076659 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n6mg\" (UniqueName: \"kubernetes.io/projected/93758036-c735-43ff-9ae8-1dd2f95c7e71-kube-api-access-8n6mg\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.076705 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-catalog-content\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.076874 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-utilities\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.078418 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-utilities\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.080839 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-catalog-content\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.104030 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c0eaf9-bfa3-491c-a219-6450089b378e-kube-api-access-k4qgk" (OuterVolumeSpecName: "kube-api-access-k4qgk") pod "e6c0eaf9-bfa3-491c-a219-6450089b378e" (UID: "e6c0eaf9-bfa3-491c-a219-6450089b378e"). InnerVolumeSpecName "kube-api-access-k4qgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.105921 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n6mg\" (UniqueName: \"kubernetes.io/projected/93758036-c735-43ff-9ae8-1dd2f95c7e71-kube-api-access-8n6mg\") pod \"community-operators-bgqnc\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.168575 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-config" (OuterVolumeSpecName: "config") pod "e6c0eaf9-bfa3-491c-a219-6450089b378e" (UID: "e6c0eaf9-bfa3-491c-a219-6450089b378e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.168787 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e6c0eaf9-bfa3-491c-a219-6450089b378e" (UID: "e6c0eaf9-bfa3-491c-a219-6450089b378e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.184965 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.185016 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.185029 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4qgk\" (UniqueName: \"kubernetes.io/projected/e6c0eaf9-bfa3-491c-a219-6450089b378e-kube-api-access-k4qgk\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.220586 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6c0eaf9-bfa3-491c-a219-6450089b378e" (UID: "e6c0eaf9-bfa3-491c-a219-6450089b378e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.228009 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" event={"ID":"e6c0eaf9-bfa3-491c-a219-6450089b378e","Type":"ContainerDied","Data":"ead164944e353114b4f88fe5edf81095d6745bd3f41c867d172bce3f27665a66"} Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.228093 4784 scope.go:117] "RemoveContainer" containerID="3f8542c3412e5fae107d8041e6b6112a777575f90fd87e7c5c3b5b9f574993a6" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.228313 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-q5j59" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.245189 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6c0eaf9-bfa3-491c-a219-6450089b378e" (UID: "e6c0eaf9-bfa3-491c-a219-6450089b378e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.295366 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.296633 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.296649 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.330837 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6c0eaf9-bfa3-491c-a219-6450089b378e" (UID: "e6c0eaf9-bfa3-491c-a219-6450089b378e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.391994 4784 scope.go:117] "RemoveContainer" containerID="78cf49542c186cc114b93c127adca1a3480906f0462ed0f3efb59e6be57ef153" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.398843 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6c0eaf9-bfa3-491c-a219-6450089b378e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.424305 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-s7ths"] Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.642847 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-q5j59"] Jan 23 06:46:46 crc kubenswrapper[4784]: I0123 06:46:46.658235 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-q5j59"] Jan 23 06:46:47 crc kubenswrapper[4784]: W0123 06:46:47.048049 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93758036_c735_43ff_9ae8_1dd2f95c7e71.slice/crio-5d721aacd728821415f62a07753bcbefdec5aa2c9f8d2f2845a456ec7c33cc8e WatchSource:0}: Error finding container 5d721aacd728821415f62a07753bcbefdec5aa2c9f8d2f2845a456ec7c33cc8e: Status 404 returned error can't find the container with id 5d721aacd728821415f62a07753bcbefdec5aa2c9f8d2f2845a456ec7c33cc8e Jan 23 06:46:47 crc kubenswrapper[4784]: I0123 06:46:47.050750 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgqnc"] Jan 23 06:46:47 crc kubenswrapper[4784]: I0123 06:46:47.244100 4784 generic.go:334] "Generic (PLEG): container finished" podID="1b313cca-4d7d-435b-9c85-8ca53f4b4bf1" containerID="4bdf5371af9d24aa49e7cf70827c099b10836364acd660e04141847ee6462026" exitCode=0 Jan 23 06:46:47 crc 
kubenswrapper[4784]: I0123 06:46:47.244213 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" event={"ID":"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1","Type":"ContainerDied","Data":"4bdf5371af9d24aa49e7cf70827c099b10836364acd660e04141847ee6462026"} Jan 23 06:46:47 crc kubenswrapper[4784]: I0123 06:46:47.244294 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" event={"ID":"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1","Type":"ContainerStarted","Data":"41666986ed1183f4c4802ed5a84f14d6dfaeab8814ace05ac8fb68ae13c534a7"} Jan 23 06:46:47 crc kubenswrapper[4784]: I0123 06:46:47.248455 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgqnc" event={"ID":"93758036-c735-43ff-9ae8-1dd2f95c7e71","Type":"ContainerStarted","Data":"5d721aacd728821415f62a07753bcbefdec5aa2c9f8d2f2845a456ec7c33cc8e"} Jan 23 06:46:47 crc kubenswrapper[4784]: I0123 06:46:47.270586 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c0eaf9-bfa3-491c-a219-6450089b378e" path="/var/lib/kubelet/pods/e6c0eaf9-bfa3-491c-a219-6450089b378e/volumes" Jan 23 06:46:48 crc kubenswrapper[4784]: I0123 06:46:48.261280 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" event={"ID":"1b313cca-4d7d-435b-9c85-8ca53f4b4bf1","Type":"ContainerStarted","Data":"f94c283a28734d6d254600692a2d2bf4da2b2130dd82449f485aaf655ba7616d"} Jan 23 06:46:48 crc kubenswrapper[4784]: I0123 06:46:48.261863 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:48 crc kubenswrapper[4784]: I0123 06:46:48.263685 4784 generic.go:334] "Generic (PLEG): container finished" podID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerID="a975ce331cc7827d3fa606c590f022a940697b4732bf746faba00bf4b6a3e3d3" exitCode=0 Jan 23 06:46:48 crc kubenswrapper[4784]: I0123 06:46:48.263783 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgqnc" event={"ID":"93758036-c735-43ff-9ae8-1dd2f95c7e71","Type":"ContainerDied","Data":"a975ce331cc7827d3fa606c590f022a940697b4732bf746faba00bf4b6a3e3d3"} Jan 23 06:46:48 crc kubenswrapper[4784]: I0123 06:46:48.305171 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" podStartSLOduration=3.305145212 podStartE2EDuration="3.305145212s" podCreationTimestamp="2026-01-23 06:46:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:46:48.288569014 +0000 UTC m=+1611.521076978" watchObservedRunningTime="2026-01-23 06:46:48.305145212 +0000 UTC m=+1611.537653186" Jan 23 06:46:50 crc kubenswrapper[4784]: I0123 06:46:50.295555 4784 generic.go:334] "Generic (PLEG): container finished" podID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerID="853d5e9d25ae13526c1352113d4d9952d243c46b955a65b89a06f35d4b1470dd" exitCode=0 Jan 23 06:46:50 crc kubenswrapper[4784]: I0123 06:46:50.295691 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgqnc" event={"ID":"93758036-c735-43ff-9ae8-1dd2f95c7e71","Type":"ContainerDied","Data":"853d5e9d25ae13526c1352113d4d9952d243c46b955a65b89a06f35d4b1470dd"} Jan 23 06:46:51 crc kubenswrapper[4784]: I0123 06:46:51.317432 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgqnc" event={"ID":"93758036-c735-43ff-9ae8-1dd2f95c7e71","Type":"ContainerStarted","Data":"e860af3f0d45f5d944974d7e993679f89e0bde07ecad65d242b18270fcb996a2"} Jan 23 06:46:51 crc kubenswrapper[4784]: I0123 06:46:51.353347 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bgqnc" podStartSLOduration=3.630644905 podStartE2EDuration="6.35331635s" 
podCreationTimestamp="2026-01-23 06:46:45 +0000 UTC" firstStartedPulling="2026-01-23 06:46:48.266948562 +0000 UTC m=+1611.499456536" lastFinishedPulling="2026-01-23 06:46:50.989620007 +0000 UTC m=+1614.222127981" observedRunningTime="2026-01-23 06:46:51.346583324 +0000 UTC m=+1614.579091308" watchObservedRunningTime="2026-01-23 06:46:51.35331635 +0000 UTC m=+1614.585824324" Jan 23 06:46:53 crc kubenswrapper[4784]: I0123 06:46:53.603256 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:46:53 crc kubenswrapper[4784]: I0123 06:46:53.604046 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:46:53 crc kubenswrapper[4784]: I0123 06:46:53.605134 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:46:53 crc kubenswrapper[4784]: I0123 06:46:53.606374 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:46:53 crc kubenswrapper[4784]: I0123 06:46:53.606466 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" 
podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" gracePeriod=600 Jan 23 06:46:53 crc kubenswrapper[4784]: E0123 06:46:53.787926 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:46:54 crc kubenswrapper[4784]: I0123 06:46:54.365052 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" exitCode=0 Jan 23 06:46:54 crc kubenswrapper[4784]: I0123 06:46:54.365133 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49"} Jan 23 06:46:54 crc kubenswrapper[4784]: I0123 06:46:54.365220 4784 scope.go:117] "RemoveContainer" containerID="99f5c7da473bb191e287690718f667aa1ba0bc87b545db802bd06bfff3e98701" Jan 23 06:46:54 crc kubenswrapper[4784]: I0123 06:46:54.366223 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:46:54 crc kubenswrapper[4784]: E0123 06:46:54.366622 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:46:55 crc kubenswrapper[4784]: I0123 06:46:55.674214 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cd9bffc9-s7ths" Jan 23 06:46:55 crc kubenswrapper[4784]: I0123 06:46:55.827194 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-6h8qs"] Jan 23 06:46:55 crc kubenswrapper[4784]: I0123 06:46:55.827953 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" podUID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" containerName="dnsmasq-dns" containerID="cri-o://65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b" gracePeriod=10 Jan 23 06:46:56 crc kubenswrapper[4784]: I0123 06:46:56.298990 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:56 crc kubenswrapper[4784]: I0123 06:46:56.300094 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:56 crc kubenswrapper[4784]: I0123 06:46:56.349594 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:56 crc kubenswrapper[4784]: I0123 06:46:56.447248 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:56 crc kubenswrapper[4784]: I0123 06:46:56.600638 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgqnc"] Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.294105 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.326467 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wjgw\" (UniqueName: \"kubernetes.io/projected/c15cbdc1-7ce9-4f95-8125-5d021cac3703-kube-api-access-5wjgw\") pod \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.326532 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-openstack-edpm-ipam\") pod \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.326620 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-svc\") pod \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.326663 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-config\") pod \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.327843 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-swift-storage-0\") pod \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.327901 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-nb\") pod \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.328052 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-sb\") pod \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\" (UID: \"c15cbdc1-7ce9-4f95-8125-5d021cac3703\") " Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.356352 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15cbdc1-7ce9-4f95-8125-5d021cac3703-kube-api-access-5wjgw" (OuterVolumeSpecName: "kube-api-access-5wjgw") pod "c15cbdc1-7ce9-4f95-8125-5d021cac3703" (UID: "c15cbdc1-7ce9-4f95-8125-5d021cac3703"). InnerVolumeSpecName "kube-api-access-5wjgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.421370 4784 generic.go:334] "Generic (PLEG): container finished" podID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" containerID="65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b" exitCode=0 Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.421577 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.421660 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" event={"ID":"c15cbdc1-7ce9-4f95-8125-5d021cac3703","Type":"ContainerDied","Data":"65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b"} Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.421690 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bgqnc" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerName="registry-server" containerID="cri-o://e860af3f0d45f5d944974d7e993679f89e0bde07ecad65d242b18270fcb996a2" gracePeriod=2 Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.421710 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-6h8qs" event={"ID":"c15cbdc1-7ce9-4f95-8125-5d021cac3703","Type":"ContainerDied","Data":"0d678ace8b49a2f5c1a1d38b5c7abfa20c38bf87785cd4dd2e06076fa3e809e5"} Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.421779 4784 scope.go:117] "RemoveContainer" containerID="65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.433601 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wjgw\" (UniqueName: \"kubernetes.io/projected/c15cbdc1-7ce9-4f95-8125-5d021cac3703-kube-api-access-5wjgw\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.458005 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c15cbdc1-7ce9-4f95-8125-5d021cac3703" (UID: "c15cbdc1-7ce9-4f95-8125-5d021cac3703"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.475640 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-config" (OuterVolumeSpecName: "config") pod "c15cbdc1-7ce9-4f95-8125-5d021cac3703" (UID: "c15cbdc1-7ce9-4f95-8125-5d021cac3703"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.476615 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c15cbdc1-7ce9-4f95-8125-5d021cac3703" (UID: "c15cbdc1-7ce9-4f95-8125-5d021cac3703"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.478687 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c15cbdc1-7ce9-4f95-8125-5d021cac3703" (UID: "c15cbdc1-7ce9-4f95-8125-5d021cac3703"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.490247 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c15cbdc1-7ce9-4f95-8125-5d021cac3703" (UID: "c15cbdc1-7ce9-4f95-8125-5d021cac3703"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.501210 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c15cbdc1-7ce9-4f95-8125-5d021cac3703" (UID: "c15cbdc1-7ce9-4f95-8125-5d021cac3703"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.536464 4784 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.536633 4784 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.536696 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.536767 4784 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.536833 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.536886 4784 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c15cbdc1-7ce9-4f95-8125-5d021cac3703-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.555900 4784 scope.go:117] "RemoveContainer" containerID="6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.582189 4784 scope.go:117] "RemoveContainer" containerID="65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b" Jan 23 06:46:58 crc kubenswrapper[4784]: E0123 06:46:58.582941 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b\": container with ID starting with 65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b not found: ID does not exist" containerID="65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.582982 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b"} err="failed to get container status \"65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b\": rpc error: code = NotFound desc = could not find container \"65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b\": container with ID starting with 65ece411910fe9866306ba698d8230b07ff26bb5e20227a678876995860bb76b not found: ID does not exist" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.583010 4784 scope.go:117] "RemoveContainer" containerID="6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c" Jan 23 06:46:58 crc kubenswrapper[4784]: E0123 06:46:58.583489 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c\": container with ID starting with 
6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c not found: ID does not exist" containerID="6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.583519 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c"} err="failed to get container status \"6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c\": rpc error: code = NotFound desc = could not find container \"6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c\": container with ID starting with 6845955ba0e41105852b5e66e5c45be4adb2133e0a67ce765beeb1c4bf59391c not found: ID does not exist" Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.773807 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-6h8qs"] Jan 23 06:46:58 crc kubenswrapper[4784]: I0123 06:46:58.831036 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-6h8qs"] Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.274298 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" path="/var/lib/kubelet/pods/c15cbdc1-7ce9-4f95-8125-5d021cac3703/volumes" Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.435936 4784 generic.go:334] "Generic (PLEG): container finished" podID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerID="e860af3f0d45f5d944974d7e993679f89e0bde07ecad65d242b18270fcb996a2" exitCode=0 Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.436065 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgqnc" event={"ID":"93758036-c735-43ff-9ae8-1dd2f95c7e71","Type":"ContainerDied","Data":"e860af3f0d45f5d944974d7e993679f89e0bde07ecad65d242b18270fcb996a2"} Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.436153 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgqnc" event={"ID":"93758036-c735-43ff-9ae8-1dd2f95c7e71","Type":"ContainerDied","Data":"5d721aacd728821415f62a07753bcbefdec5aa2c9f8d2f2845a456ec7c33cc8e"} Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.436169 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d721aacd728821415f62a07753bcbefdec5aa2c9f8d2f2845a456ec7c33cc8e" Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.436922 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.459033 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-catalog-content\") pod \"93758036-c735-43ff-9ae8-1dd2f95c7e71\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.459159 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-utilities\") pod \"93758036-c735-43ff-9ae8-1dd2f95c7e71\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.459208 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n6mg\" (UniqueName: \"kubernetes.io/projected/93758036-c735-43ff-9ae8-1dd2f95c7e71-kube-api-access-8n6mg\") pod \"93758036-c735-43ff-9ae8-1dd2f95c7e71\" (UID: \"93758036-c735-43ff-9ae8-1dd2f95c7e71\") " Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.460371 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-utilities" (OuterVolumeSpecName: "utilities") pod 
"93758036-c735-43ff-9ae8-1dd2f95c7e71" (UID: "93758036-c735-43ff-9ae8-1dd2f95c7e71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.467977 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93758036-c735-43ff-9ae8-1dd2f95c7e71-kube-api-access-8n6mg" (OuterVolumeSpecName: "kube-api-access-8n6mg") pod "93758036-c735-43ff-9ae8-1dd2f95c7e71" (UID: "93758036-c735-43ff-9ae8-1dd2f95c7e71"). InnerVolumeSpecName "kube-api-access-8n6mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.522867 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93758036-c735-43ff-9ae8-1dd2f95c7e71" (UID: "93758036-c735-43ff-9ae8-1dd2f95c7e71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.565623 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.566077 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93758036-c735-43ff-9ae8-1dd2f95c7e71-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:59 crc kubenswrapper[4784]: I0123 06:46:59.566224 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n6mg\" (UniqueName: \"kubernetes.io/projected/93758036-c735-43ff-9ae8-1dd2f95c7e71-kube-api-access-8n6mg\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:00 crc kubenswrapper[4784]: I0123 06:47:00.448418 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgqnc" Jan 23 06:47:00 crc kubenswrapper[4784]: I0123 06:47:00.489908 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgqnc"] Jan 23 06:47:00 crc kubenswrapper[4784]: I0123 06:47:00.505028 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bgqnc"] Jan 23 06:47:01 crc kubenswrapper[4784]: I0123 06:47:01.270907 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" path="/var/lib/kubelet/pods/93758036-c735-43ff-9ae8-1dd2f95c7e71/volumes" Jan 23 06:47:05 crc kubenswrapper[4784]: I0123 06:47:05.254349 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:47:05 crc kubenswrapper[4784]: E0123 06:47:05.255820 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:47:07 crc kubenswrapper[4784]: I0123 06:47:07.537064 4784 generic.go:334] "Generic (PLEG): container finished" podID="f65404b7-5dd6-409f-87c1-633679f2d5cb" containerID="21c279abd7bced8474d0caea4791c94895a80df1d90f3168e0620b0b5038e55f" exitCode=0 Jan 23 06:47:07 crc kubenswrapper[4784]: I0123 06:47:07.538099 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f65404b7-5dd6-409f-87c1-633679f2d5cb","Type":"ContainerDied","Data":"21c279abd7bced8474d0caea4791c94895a80df1d90f3168e0620b0b5038e55f"} Jan 23 06:47:08 crc kubenswrapper[4784]: I0123 06:47:08.551688 4784 generic.go:334] "Generic (PLEG): container finished" podID="807272ae-7f38-45f1-acd2-984a1a1840d8" containerID="8a1ebed1b53b172c5aa55dc6245acdbd984b9c068024d12ccfb4cdd54b9e68fd" exitCode=0 Jan 23 06:47:08 crc kubenswrapper[4784]: I0123 06:47:08.551792 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807272ae-7f38-45f1-acd2-984a1a1840d8","Type":"ContainerDied","Data":"8a1ebed1b53b172c5aa55dc6245acdbd984b9c068024d12ccfb4cdd54b9e68fd"} Jan 23 06:47:08 crc kubenswrapper[4784]: I0123 06:47:08.554993 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f65404b7-5dd6-409f-87c1-633679f2d5cb","Type":"ContainerStarted","Data":"cfd655a01aa8f06ab12200fafb1111d026270358269db7b741b9953ea6b52123"} Jan 23 06:47:08 crc kubenswrapper[4784]: I0123 06:47:08.555969 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 06:47:08 crc kubenswrapper[4784]: I0123 06:47:08.622081 4784 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.622048483 podStartE2EDuration="38.622048483s" podCreationTimestamp="2026-01-23 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:47:08.614109278 +0000 UTC m=+1631.846617272" watchObservedRunningTime="2026-01-23 06:47:08.622048483 +0000 UTC m=+1631.854556457" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.568338 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"807272ae-7f38-45f1-acd2-984a1a1840d8","Type":"ContainerStarted","Data":"7b590742f4af39d013105505eaf4c33896eabf7378f83591490c359a7d4dcebc"} Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.569026 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.603545 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.603513733 podStartE2EDuration="37.603513733s" podCreationTimestamp="2026-01-23 06:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:47:09.59650266 +0000 UTC m=+1632.829010654" watchObservedRunningTime="2026-01-23 06:47:09.603513733 +0000 UTC m=+1632.836021707" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703006 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w"] Jan 23 06:47:09 crc kubenswrapper[4784]: E0123 06:47:09.703607 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerName="registry-server" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703625 4784 
state_mem.go:107] "Deleted CPUSet assignment" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerName="registry-server" Jan 23 06:47:09 crc kubenswrapper[4784]: E0123 06:47:09.703641 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c0eaf9-bfa3-491c-a219-6450089b378e" containerName="init" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703647 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c0eaf9-bfa3-491c-a219-6450089b378e" containerName="init" Jan 23 06:47:09 crc kubenswrapper[4784]: E0123 06:47:09.703664 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" containerName="init" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703670 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" containerName="init" Jan 23 06:47:09 crc kubenswrapper[4784]: E0123 06:47:09.703695 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerName="extract-utilities" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703702 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerName="extract-utilities" Jan 23 06:47:09 crc kubenswrapper[4784]: E0123 06:47:09.703719 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerName="extract-content" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703724 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerName="extract-content" Jan 23 06:47:09 crc kubenswrapper[4784]: E0123 06:47:09.703738 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" containerName="dnsmasq-dns" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703758 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" containerName="dnsmasq-dns" Jan 23 06:47:09 crc kubenswrapper[4784]: E0123 06:47:09.703773 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c0eaf9-bfa3-491c-a219-6450089b378e" containerName="dnsmasq-dns" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703779 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c0eaf9-bfa3-491c-a219-6450089b378e" containerName="dnsmasq-dns" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.703990 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15cbdc1-7ce9-4f95-8125-5d021cac3703" containerName="dnsmasq-dns" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.704004 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="93758036-c735-43ff-9ae8-1dd2f95c7e71" containerName="registry-server" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.704025 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c0eaf9-bfa3-491c-a219-6450089b378e" containerName="dnsmasq-dns" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.704899 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.715908 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.715951 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.716784 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.716798 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.722259 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w"] Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.789429 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcpzd\" (UniqueName: \"kubernetes.io/projected/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-kube-api-access-mcpzd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.789785 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: 
I0123 06:47:09.789931 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.790303 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.892607 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.892743 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.892838 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcpzd\" (UniqueName: 
\"kubernetes.io/projected/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-kube-api-access-mcpzd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.892889 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.899283 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.905234 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.908022 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:09 crc kubenswrapper[4784]: I0123 06:47:09.920614 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcpzd\" (UniqueName: \"kubernetes.io/projected/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-kube-api-access-mcpzd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:10 crc kubenswrapper[4784]: I0123 06:47:10.030258 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:10 crc kubenswrapper[4784]: W0123 06:47:10.733338 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ecbc8c8_6db3_43c4_8b23_e2f7d72082c4.slice/crio-d33c6f2eb0e9ae7699b83c1667bfc732e97599475994f72a06a88a0ca52b6ec5 WatchSource:0}: Error finding container d33c6f2eb0e9ae7699b83c1667bfc732e97599475994f72a06a88a0ca52b6ec5: Status 404 returned error can't find the container with id d33c6f2eb0e9ae7699b83c1667bfc732e97599475994f72a06a88a0ca52b6ec5 Jan 23 06:47:10 crc kubenswrapper[4784]: I0123 06:47:10.733960 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w"] Jan 23 06:47:11 crc kubenswrapper[4784]: I0123 06:47:11.595050 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" event={"ID":"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4","Type":"ContainerStarted","Data":"d33c6f2eb0e9ae7699b83c1667bfc732e97599475994f72a06a88a0ca52b6ec5"} Jan 23 06:47:18 crc kubenswrapper[4784]: I0123 06:47:18.255456 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:47:18 crc 
kubenswrapper[4784]: E0123 06:47:18.257435 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:47:20 crc kubenswrapper[4784]: I0123 06:47:20.624602 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f65404b7-5dd6-409f-87c1-633679f2d5cb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.226:5671: connect: connection refused" Jan 23 06:47:22 crc kubenswrapper[4784]: I0123 06:47:22.881626 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="807272ae-7f38-45f1-acd2-984a1a1840d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.227:5671: connect: connection refused" Jan 23 06:47:23 crc kubenswrapper[4784]: I0123 06:47:23.788966 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" event={"ID":"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4","Type":"ContainerStarted","Data":"6d902bf61a32d02b2ee84583083bde5e887f2768002e1b073a26ab0ba8a0112b"} Jan 23 06:47:23 crc kubenswrapper[4784]: I0123 06:47:23.818310 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" podStartSLOduration=3.152212919 podStartE2EDuration="14.818282251s" podCreationTimestamp="2026-01-23 06:47:09 +0000 UTC" firstStartedPulling="2026-01-23 06:47:10.736440704 +0000 UTC m=+1633.968948678" lastFinishedPulling="2026-01-23 06:47:22.402510036 +0000 UTC m=+1645.635018010" observedRunningTime="2026-01-23 06:47:23.81298593 +0000 UTC 
m=+1647.045493904" watchObservedRunningTime="2026-01-23 06:47:23.818282251 +0000 UTC m=+1647.050790225" Jan 23 06:47:30 crc kubenswrapper[4784]: I0123 06:47:30.622675 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f65404b7-5dd6-409f-87c1-633679f2d5cb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.226:5671: connect: connection refused" Jan 23 06:47:31 crc kubenswrapper[4784]: I0123 06:47:31.254966 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:47:31 crc kubenswrapper[4784]: E0123 06:47:31.255307 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:47:32 crc kubenswrapper[4784]: I0123 06:47:32.881995 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:47:34 crc kubenswrapper[4784]: I0123 06:47:34.095240 4784 scope.go:117] "RemoveContainer" containerID="f5ebaf4ee0dd3216164b2a5f19c0af5b91c6f17768645a2ebd502373609b5cb7" Jan 23 06:47:34 crc kubenswrapper[4784]: I0123 06:47:34.140607 4784 scope.go:117] "RemoveContainer" containerID="944dacb24b0774ef942394c6c63e230b94461947eb399460b1e0214bab2aa9d5" Jan 23 06:47:39 crc kubenswrapper[4784]: I0123 06:47:39.002327 4784 generic.go:334] "Generic (PLEG): container finished" podID="0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4" containerID="6d902bf61a32d02b2ee84583083bde5e887f2768002e1b073a26ab0ba8a0112b" exitCode=0 Jan 23 06:47:39 crc kubenswrapper[4784]: I0123 06:47:39.002435 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" event={"ID":"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4","Type":"ContainerDied","Data":"6d902bf61a32d02b2ee84583083bde5e887f2768002e1b073a26ab0ba8a0112b"} Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.530405 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.623059 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.658101 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcpzd\" (UniqueName: \"kubernetes.io/projected/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-kube-api-access-mcpzd\") pod \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.658164 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-ssh-key-openstack-edpm-ipam\") pod \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.658425 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-inventory\") pod \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.658459 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-repo-setup-combined-ca-bundle\") pod 
\"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\" (UID: \"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4\") " Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.676463 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-kube-api-access-mcpzd" (OuterVolumeSpecName: "kube-api-access-mcpzd") pod "0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4" (UID: "0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4"). InnerVolumeSpecName "kube-api-access-mcpzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.680837 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4" (UID: "0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.717423 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4" (UID: "0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.746157 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-inventory" (OuterVolumeSpecName: "inventory") pod "0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4" (UID: "0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.762490 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.762544 4784 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.762565 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcpzd\" (UniqueName: \"kubernetes.io/projected/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-kube-api-access-mcpzd\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:40 crc kubenswrapper[4784]: I0123 06:47:40.762581 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.026464 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" event={"ID":"0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4","Type":"ContainerDied","Data":"d33c6f2eb0e9ae7699b83c1667bfc732e97599475994f72a06a88a0ca52b6ec5"} Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.026522 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33c6f2eb0e9ae7699b83c1667bfc732e97599475994f72a06a88a0ca52b6ec5" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.026596 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.125624 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm"] Jan 23 06:47:41 crc kubenswrapper[4784]: E0123 06:47:41.131186 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.131314 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.131632 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.132711 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.136840 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.136908 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.137479 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.143847 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm"] Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.144139 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.172842 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4s4\" (UniqueName: \"kubernetes.io/projected/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-kube-api-access-qf4s4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.172936 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.173130 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.275633 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.275843 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf4s4\" (UniqueName: \"kubernetes.io/projected/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-kube-api-access-qf4s4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.275898 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.282494 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: 
\"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.289335 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.295819 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf4s4\" (UniqueName: \"kubernetes.io/projected/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-kube-api-access-qf4s4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pbcbm\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:41 crc kubenswrapper[4784]: I0123 06:47:41.454956 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:42 crc kubenswrapper[4784]: W0123 06:47:42.100142 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1c57ee2_9b78_4fd3_a5d8_e46caf648c4f.slice/crio-68aaf51e55ee02151668d07670dc033e940b5b00e26fac232b8be841db083962 WatchSource:0}: Error finding container 68aaf51e55ee02151668d07670dc033e940b5b00e26fac232b8be841db083962: Status 404 returned error can't find the container with id 68aaf51e55ee02151668d07670dc033e940b5b00e26fac232b8be841db083962 Jan 23 06:47:42 crc kubenswrapper[4784]: I0123 06:47:42.100982 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm"] Jan 23 06:47:42 crc kubenswrapper[4784]: I0123 06:47:42.105534 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:47:43 crc kubenswrapper[4784]: I0123 06:47:43.049717 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" event={"ID":"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f","Type":"ContainerStarted","Data":"cd81d109ef388fe416480ab8be09499e02bb9142aa439cc41c605df3ac0b86cf"} Jan 23 06:47:43 crc kubenswrapper[4784]: I0123 06:47:43.050166 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" event={"ID":"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f","Type":"ContainerStarted","Data":"68aaf51e55ee02151668d07670dc033e940b5b00e26fac232b8be841db083962"} Jan 23 06:47:43 crc kubenswrapper[4784]: I0123 06:47:43.076081 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" podStartSLOduration=1.596835225 podStartE2EDuration="2.076052626s" podCreationTimestamp="2026-01-23 06:47:41 +0000 UTC" 
firstStartedPulling="2026-01-23 06:47:42.105160947 +0000 UTC m=+1665.337668921" lastFinishedPulling="2026-01-23 06:47:42.584378348 +0000 UTC m=+1665.816886322" observedRunningTime="2026-01-23 06:47:43.067818483 +0000 UTC m=+1666.300326467" watchObservedRunningTime="2026-01-23 06:47:43.076052626 +0000 UTC m=+1666.308560600" Jan 23 06:47:46 crc kubenswrapper[4784]: I0123 06:47:46.084844 4784 generic.go:334] "Generic (PLEG): container finished" podID="b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f" containerID="cd81d109ef388fe416480ab8be09499e02bb9142aa439cc41c605df3ac0b86cf" exitCode=0 Jan 23 06:47:46 crc kubenswrapper[4784]: I0123 06:47:46.084922 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" event={"ID":"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f","Type":"ContainerDied","Data":"cd81d109ef388fe416480ab8be09499e02bb9142aa439cc41c605df3ac0b86cf"} Jan 23 06:47:46 crc kubenswrapper[4784]: I0123 06:47:46.254955 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:47:46 crc kubenswrapper[4784]: E0123 06:47:46.255974 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.599718 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.666538 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf4s4\" (UniqueName: \"kubernetes.io/projected/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-kube-api-access-qf4s4\") pod \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.666763 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-ssh-key-openstack-edpm-ipam\") pod \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.666961 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-inventory\") pod \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\" (UID: \"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f\") " Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.675060 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-kube-api-access-qf4s4" (OuterVolumeSpecName: "kube-api-access-qf4s4") pod "b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f" (UID: "b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f"). InnerVolumeSpecName "kube-api-access-qf4s4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.698467 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f" (UID: "b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.700070 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-inventory" (OuterVolumeSpecName: "inventory") pod "b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f" (UID: "b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.769739 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.770410 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf4s4\" (UniqueName: \"kubernetes.io/projected/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-kube-api-access-qf4s4\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:47 crc kubenswrapper[4784]: I0123 06:47:47.770520 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.113923 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" 
event={"ID":"b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f","Type":"ContainerDied","Data":"68aaf51e55ee02151668d07670dc033e940b5b00e26fac232b8be841db083962"} Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.113993 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68aaf51e55ee02151668d07670dc033e940b5b00e26fac232b8be841db083962" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.114404 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pbcbm" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.207956 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz"] Jan 23 06:47:48 crc kubenswrapper[4784]: E0123 06:47:48.208711 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.208767 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.209122 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.210223 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.213609 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.213662 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.214316 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.215974 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.223364 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz"] Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.283393 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.283523 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.283607 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.283691 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgzr\" (UniqueName: \"kubernetes.io/projected/64311990-e01e-4553-89da-a3c7bb54b63c-kube-api-access-5jgzr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.386670 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.386899 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jgzr\" (UniqueName: \"kubernetes.io/projected/64311990-e01e-4553-89da-a3c7bb54b63c-kube-api-access-5jgzr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.387070 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.387202 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.393393 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.396351 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.397792 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.408303 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jgzr\" (UniqueName: \"kubernetes.io/projected/64311990-e01e-4553-89da-a3c7bb54b63c-kube-api-access-5jgzr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:48 crc kubenswrapper[4784]: I0123 06:47:48.528583 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:47:49 crc kubenswrapper[4784]: I0123 06:47:49.106625 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz"] Jan 23 06:47:49 crc kubenswrapper[4784]: W0123 06:47:49.109336 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64311990_e01e_4553_89da_a3c7bb54b63c.slice/crio-f55ee62717e6661e16d57eaeaaf7c57ce70e141d409f460cb8c9e7143a05788f WatchSource:0}: Error finding container f55ee62717e6661e16d57eaeaaf7c57ce70e141d409f460cb8c9e7143a05788f: Status 404 returned error can't find the container with id f55ee62717e6661e16d57eaeaaf7c57ce70e141d409f460cb8c9e7143a05788f Jan 23 06:47:49 crc kubenswrapper[4784]: I0123 06:47:49.138709 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" event={"ID":"64311990-e01e-4553-89da-a3c7bb54b63c","Type":"ContainerStarted","Data":"f55ee62717e6661e16d57eaeaaf7c57ce70e141d409f460cb8c9e7143a05788f"} Jan 23 06:47:50 crc kubenswrapper[4784]: I0123 06:47:50.151798 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" 
event={"ID":"64311990-e01e-4553-89da-a3c7bb54b63c","Type":"ContainerStarted","Data":"3ae542121765ebe0dd7be2f2f6eb3a4d4b46d7d82fd19c7740b66f980d5c5d34"} Jan 23 06:47:50 crc kubenswrapper[4784]: I0123 06:47:50.178097 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" podStartSLOduration=1.4697946100000001 podStartE2EDuration="2.178051562s" podCreationTimestamp="2026-01-23 06:47:48 +0000 UTC" firstStartedPulling="2026-01-23 06:47:49.113339452 +0000 UTC m=+1672.345847426" lastFinishedPulling="2026-01-23 06:47:49.821596404 +0000 UTC m=+1673.054104378" observedRunningTime="2026-01-23 06:47:50.169091852 +0000 UTC m=+1673.401599856" watchObservedRunningTime="2026-01-23 06:47:50.178051562 +0000 UTC m=+1673.410559576" Jan 23 06:48:01 crc kubenswrapper[4784]: I0123 06:48:01.254471 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:48:01 crc kubenswrapper[4784]: E0123 06:48:01.255651 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:48:12 crc kubenswrapper[4784]: I0123 06:48:12.254455 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:48:12 crc kubenswrapper[4784]: E0123 06:48:12.255870 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:48:25 crc kubenswrapper[4784]: I0123 06:48:25.254806 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:48:25 crc kubenswrapper[4784]: E0123 06:48:25.255812 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:48:34 crc kubenswrapper[4784]: I0123 06:48:34.325605 4784 scope.go:117] "RemoveContainer" containerID="484ad734e3569d976c619f2d62e3ec503464dbed1e626752d09c8197e0a2e812" Jan 23 06:48:37 crc kubenswrapper[4784]: I0123 06:48:37.268019 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:48:37 crc kubenswrapper[4784]: E0123 06:48:37.269504 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:48:51 crc kubenswrapper[4784]: I0123 06:48:51.254613 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:48:51 crc kubenswrapper[4784]: E0123 06:48:51.256174 4784 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:49:04 crc kubenswrapper[4784]: I0123 06:49:04.254146 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:49:04 crc kubenswrapper[4784]: E0123 06:49:04.255314 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:49:18 crc kubenswrapper[4784]: I0123 06:49:18.256011 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:49:18 crc kubenswrapper[4784]: E0123 06:49:18.259677 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:49:31 crc kubenswrapper[4784]: I0123 06:49:31.254667 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:49:31 crc kubenswrapper[4784]: E0123 
06:49:31.255898 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:49:34 crc kubenswrapper[4784]: I0123 06:49:34.439226 4784 scope.go:117] "RemoveContainer" containerID="7fe6192d7ae7aa3ee8930b98adda10683c494470230651883cdeb1e9e5d3cd4a" Jan 23 06:49:42 crc kubenswrapper[4784]: I0123 06:49:42.253984 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:49:42 crc kubenswrapper[4784]: E0123 06:49:42.255093 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:49:55 crc kubenswrapper[4784]: I0123 06:49:55.255074 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:49:55 crc kubenswrapper[4784]: E0123 06:49:55.256212 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:50:07 crc 
kubenswrapper[4784]: I0123 06:50:07.271830 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:50:07 crc kubenswrapper[4784]: E0123 06:50:07.273515 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:50:22 crc kubenswrapper[4784]: I0123 06:50:22.254413 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:50:22 crc kubenswrapper[4784]: E0123 06:50:22.255550 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:50:33 crc kubenswrapper[4784]: I0123 06:50:33.254135 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:50:33 crc kubenswrapper[4784]: E0123 06:50:33.255486 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 
23 06:50:44 crc kubenswrapper[4784]: I0123 06:50:44.073644 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e86c-account-create-update-6rl54"] Jan 23 06:50:44 crc kubenswrapper[4784]: I0123 06:50:44.088032 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e86c-account-create-update-6rl54"] Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.062006 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8fqbr"] Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.083373 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-zbpdg"] Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.095036 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-44aa-account-create-update-tc767"] Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.106859 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-zbpdg"] Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.120017 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-44aa-account-create-update-tc767"] Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.134504 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8fqbr"] Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.266428 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a02b591-5a08-4a50-a248-9d6fb8c9e13e" path="/var/lib/kubelet/pods/0a02b591-5a08-4a50-a248-9d6fb8c9e13e/volumes" Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.267681 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="398711da-15cd-410f-8a7f-8ba41455e438" path="/var/lib/kubelet/pods/398711da-15cd-410f-8a7f-8ba41455e438/volumes" Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.268415 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="456b7f3f-ca26-4bf9-944f-fb93921474fd" path="/var/lib/kubelet/pods/456b7f3f-ca26-4bf9-944f-fb93921474fd/volumes" Jan 23 06:50:45 crc kubenswrapper[4784]: I0123 06:50:45.269136 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f31b47-1781-4d5a-b7ee-13ec522694d8" path="/var/lib/kubelet/pods/56f31b47-1781-4d5a-b7ee-13ec522694d8/volumes" Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.042496 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-d401-account-create-update-vp8qw"] Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.054897 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-c6jcv"] Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.068145 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-06bd-account-create-update-pn6qb"] Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.081541 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-d401-account-create-update-vp8qw"] Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.091679 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-06bd-account-create-update-pn6qb"] Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.101084 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-c6jcv"] Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.110935 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-dqfph"] Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.122136 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-dqfph"] Jan 23 06:50:46 crc kubenswrapper[4784]: I0123 06:50:46.255288 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:50:46 crc kubenswrapper[4784]: E0123 06:50:46.255657 4784 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:50:47 crc kubenswrapper[4784]: I0123 06:50:47.275065 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2b3705-b4ae-41bc-961c-b249f979ce40" path="/var/lib/kubelet/pods/ab2b3705-b4ae-41bc-961c-b249f979ce40/volumes" Jan 23 06:50:47 crc kubenswrapper[4784]: I0123 06:50:47.276520 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3922ff9-5f68-4ef0-8a15-d0b4b566e78b" path="/var/lib/kubelet/pods/d3922ff9-5f68-4ef0-8a15-d0b4b566e78b/volumes" Jan 23 06:50:47 crc kubenswrapper[4784]: I0123 06:50:47.277459 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f878f255-96b1-4ac5-89ab-6890e1ada898" path="/var/lib/kubelet/pods/f878f255-96b1-4ac5-89ab-6890e1ada898/volumes" Jan 23 06:50:47 crc kubenswrapper[4784]: I0123 06:50:47.278735 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f92e52-4089-4f9a-90bc-a606d37b058d" path="/var/lib/kubelet/pods/f8f92e52-4089-4f9a-90bc-a606d37b058d/volumes" Jan 23 06:50:50 crc kubenswrapper[4784]: I0123 06:50:50.768066 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-856bb5496c-5hkpt" podUID="bfac942c-ab7e-42a0-8091-29079fd4da0e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 23 06:51:01 crc kubenswrapper[4784]: I0123 06:51:01.254999 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:51:01 crc kubenswrapper[4784]: E0123 06:51:01.255973 4784 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:51:06 crc kubenswrapper[4784]: I0123 06:51:06.952159 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vl9fj"] Jan 23 06:51:06 crc kubenswrapper[4784]: I0123 06:51:06.956160 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:06 crc kubenswrapper[4784]: I0123 06:51:06.963415 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl9fj"] Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.044212 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-utilities\") pod \"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.044289 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-catalog-content\") pod \"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.044501 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8skfb\" (UniqueName: 
\"kubernetes.io/projected/be031115-f6d6-44ed-9e60-a846908804bc-kube-api-access-8skfb\") pod \"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.048157 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tvgsx"] Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.059813 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tvgsx"] Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.146831 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-utilities\") pod \"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.146891 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-catalog-content\") pod \"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.146975 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8skfb\" (UniqueName: \"kubernetes.io/projected/be031115-f6d6-44ed-9e60-a846908804bc-kube-api-access-8skfb\") pod \"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.147561 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-utilities\") pod 
\"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.147692 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-catalog-content\") pod \"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.176859 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8skfb\" (UniqueName: \"kubernetes.io/projected/be031115-f6d6-44ed-9e60-a846908804bc-kube-api-access-8skfb\") pod \"redhat-marketplace-vl9fj\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.267867 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a35ddd-0a33-4ef4-86d1-11c1279b23d7" path="/var/lib/kubelet/pods/86a35ddd-0a33-4ef4-86d1-11c1279b23d7/volumes" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.293912 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:07 crc kubenswrapper[4784]: I0123 06:51:07.797730 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl9fj"] Jan 23 06:51:08 crc kubenswrapper[4784]: I0123 06:51:08.567127 4784 generic.go:334] "Generic (PLEG): container finished" podID="be031115-f6d6-44ed-9e60-a846908804bc" containerID="5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56" exitCode=0 Jan 23 06:51:08 crc kubenswrapper[4784]: I0123 06:51:08.567217 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl9fj" event={"ID":"be031115-f6d6-44ed-9e60-a846908804bc","Type":"ContainerDied","Data":"5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56"} Jan 23 06:51:08 crc kubenswrapper[4784]: I0123 06:51:08.567569 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl9fj" event={"ID":"be031115-f6d6-44ed-9e60-a846908804bc","Type":"ContainerStarted","Data":"0d8e14b154f47ff4d3cafdf2622c1a81e1a5f6b2dc8f5001fc22d62b290345dd"} Jan 23 06:51:09 crc kubenswrapper[4784]: I0123 06:51:09.578731 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl9fj" event={"ID":"be031115-f6d6-44ed-9e60-a846908804bc","Type":"ContainerStarted","Data":"b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b"} Jan 23 06:51:10 crc kubenswrapper[4784]: I0123 06:51:10.596278 4784 generic.go:334] "Generic (PLEG): container finished" podID="be031115-f6d6-44ed-9e60-a846908804bc" containerID="b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b" exitCode=0 Jan 23 06:51:10 crc kubenswrapper[4784]: I0123 06:51:10.596386 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl9fj" 
event={"ID":"be031115-f6d6-44ed-9e60-a846908804bc","Type":"ContainerDied","Data":"b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b"} Jan 23 06:51:11 crc kubenswrapper[4784]: I0123 06:51:11.615319 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl9fj" event={"ID":"be031115-f6d6-44ed-9e60-a846908804bc","Type":"ContainerStarted","Data":"8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14"} Jan 23 06:51:11 crc kubenswrapper[4784]: I0123 06:51:11.657876 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vl9fj" podStartSLOduration=3.038989442 podStartE2EDuration="5.657840235s" podCreationTimestamp="2026-01-23 06:51:06 +0000 UTC" firstStartedPulling="2026-01-23 06:51:08.573073798 +0000 UTC m=+1871.805581792" lastFinishedPulling="2026-01-23 06:51:11.191924611 +0000 UTC m=+1874.424432585" observedRunningTime="2026-01-23 06:51:11.639590396 +0000 UTC m=+1874.872098380" watchObservedRunningTime="2026-01-23 06:51:11.657840235 +0000 UTC m=+1874.890348229" Jan 23 06:51:14 crc kubenswrapper[4784]: I0123 06:51:14.254325 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:51:14 crc kubenswrapper[4784]: E0123 06:51:14.255245 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.063825 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2g628"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 
06:51:16.074109 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2g628"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.085564 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-r2prw"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.098098 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-f5snt"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.110664 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-f5snt"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.120561 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-48fe-account-create-update-cvjrf"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.132217 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-r2prw"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.145536 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a197-account-create-update-q6j44"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.155034 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-48fe-account-create-update-cvjrf"] Jan 23 06:51:16 crc kubenswrapper[4784]: I0123 06:51:16.164679 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a197-account-create-update-q6j44"] Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.037675 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c72e-account-create-update-flvx5"] Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.049590 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c72e-account-create-update-flvx5"] Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.266860 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b44b992-71dd-4aa8-aad0-9b323d47e8fb" 
path="/var/lib/kubelet/pods/0b44b992-71dd-4aa8-aad0-9b323d47e8fb/volumes" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.269771 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df9e961-9c7f-49bc-aae3-018a4850e116" path="/var/lib/kubelet/pods/1df9e961-9c7f-49bc-aae3-018a4850e116/volumes" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.270632 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6bfb06-c97c-4f8d-8da9-ba12f6640bad" path="/var/lib/kubelet/pods/8d6bfb06-c97c-4f8d-8da9-ba12f6640bad/volumes" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.271418 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a0bd14-68e7-4973-ad97-42f2238300f5" path="/var/lib/kubelet/pods/a5a0bd14-68e7-4973-ad97-42f2238300f5/volumes" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.272901 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b439ced3-cccc-44d7-b249-a37d3505df26" path="/var/lib/kubelet/pods/b439ced3-cccc-44d7-b249-a37d3505df26/volumes" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.273793 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba896b3e-c197-41d5-b182-f17f508d32b7" path="/var/lib/kubelet/pods/ba896b3e-c197-41d5-b182-f17f508d32b7/volumes" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.294602 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.294663 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.360793 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.732199 4784 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:17 crc kubenswrapper[4784]: I0123 06:51:17.800431 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl9fj"] Jan 23 06:51:19 crc kubenswrapper[4784]: I0123 06:51:19.700731 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vl9fj" podUID="be031115-f6d6-44ed-9e60-a846908804bc" containerName="registry-server" containerID="cri-o://8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14" gracePeriod=2 Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.188886 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.284648 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8skfb\" (UniqueName: \"kubernetes.io/projected/be031115-f6d6-44ed-9e60-a846908804bc-kube-api-access-8skfb\") pod \"be031115-f6d6-44ed-9e60-a846908804bc\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.284767 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-catalog-content\") pod \"be031115-f6d6-44ed-9e60-a846908804bc\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.284835 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-utilities\") pod \"be031115-f6d6-44ed-9e60-a846908804bc\" (UID: \"be031115-f6d6-44ed-9e60-a846908804bc\") " Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.287542 4784 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-utilities" (OuterVolumeSpecName: "utilities") pod "be031115-f6d6-44ed-9e60-a846908804bc" (UID: "be031115-f6d6-44ed-9e60-a846908804bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.294868 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be031115-f6d6-44ed-9e60-a846908804bc-kube-api-access-8skfb" (OuterVolumeSpecName: "kube-api-access-8skfb") pod "be031115-f6d6-44ed-9e60-a846908804bc" (UID: "be031115-f6d6-44ed-9e60-a846908804bc"). InnerVolumeSpecName "kube-api-access-8skfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.312412 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be031115-f6d6-44ed-9e60-a846908804bc" (UID: "be031115-f6d6-44ed-9e60-a846908804bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.388073 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8skfb\" (UniqueName: \"kubernetes.io/projected/be031115-f6d6-44ed-9e60-a846908804bc-kube-api-access-8skfb\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.388122 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.388132 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be031115-f6d6-44ed-9e60-a846908804bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.716483 4784 generic.go:334] "Generic (PLEG): container finished" podID="be031115-f6d6-44ed-9e60-a846908804bc" containerID="8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14" exitCode=0 Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.716546 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl9fj" event={"ID":"be031115-f6d6-44ed-9e60-a846908804bc","Type":"ContainerDied","Data":"8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14"} Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.716575 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vl9fj" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.716601 4784 scope.go:117] "RemoveContainer" containerID="8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.716587 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vl9fj" event={"ID":"be031115-f6d6-44ed-9e60-a846908804bc","Type":"ContainerDied","Data":"0d8e14b154f47ff4d3cafdf2622c1a81e1a5f6b2dc8f5001fc22d62b290345dd"} Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.750482 4784 scope.go:117] "RemoveContainer" containerID="b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.763733 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl9fj"] Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.779623 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vl9fj"] Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.795593 4784 scope.go:117] "RemoveContainer" containerID="5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.834300 4784 scope.go:117] "RemoveContainer" containerID="8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14" Jan 23 06:51:20 crc kubenswrapper[4784]: E0123 06:51:20.835405 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14\": container with ID starting with 8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14 not found: ID does not exist" containerID="8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.835597 4784 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14"} err="failed to get container status \"8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14\": rpc error: code = NotFound desc = could not find container \"8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14\": container with ID starting with 8a965e1fe5b1ae20506be020542e02e2c9e72fe34a8c83727ede0a02acd52b14 not found: ID does not exist" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.835720 4784 scope.go:117] "RemoveContainer" containerID="b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b" Jan 23 06:51:20 crc kubenswrapper[4784]: E0123 06:51:20.837293 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b\": container with ID starting with b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b not found: ID does not exist" containerID="b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.837439 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b"} err="failed to get container status \"b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b\": rpc error: code = NotFound desc = could not find container \"b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b\": container with ID starting with b1992c5c1886f4d2bde1fadab9efeb0b5678de70fef56683246a1104ed5c6d4b not found: ID does not exist" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.837535 4784 scope.go:117] "RemoveContainer" containerID="5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56" Jan 23 06:51:20 crc kubenswrapper[4784]: E0123 
06:51:20.837958 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56\": container with ID starting with 5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56 not found: ID does not exist" containerID="5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56" Jan 23 06:51:20 crc kubenswrapper[4784]: I0123 06:51:20.838013 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56"} err="failed to get container status \"5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56\": rpc error: code = NotFound desc = could not find container \"5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56\": container with ID starting with 5cf4752a37e41dcf421f1f166bc3c68df2b6f735ae243c923ecdb9b61b45fc56 not found: ID does not exist" Jan 23 06:51:21 crc kubenswrapper[4784]: I0123 06:51:21.267790 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be031115-f6d6-44ed-9e60-a846908804bc" path="/var/lib/kubelet/pods/be031115-f6d6-44ed-9e60-a846908804bc/volumes" Jan 23 06:51:23 crc kubenswrapper[4784]: I0123 06:51:23.758610 4784 generic.go:334] "Generic (PLEG): container finished" podID="64311990-e01e-4553-89da-a3c7bb54b63c" containerID="3ae542121765ebe0dd7be2f2f6eb3a4d4b46d7d82fd19c7740b66f980d5c5d34" exitCode=0 Jan 23 06:51:23 crc kubenswrapper[4784]: I0123 06:51:23.758708 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" event={"ID":"64311990-e01e-4553-89da-a3c7bb54b63c","Type":"ContainerDied","Data":"3ae542121765ebe0dd7be2f2f6eb3a4d4b46d7d82fd19c7740b66f980d5c5d34"} Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.474105 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.567441 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-inventory\") pod \"64311990-e01e-4553-89da-a3c7bb54b63c\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.567537 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-ssh-key-openstack-edpm-ipam\") pod \"64311990-e01e-4553-89da-a3c7bb54b63c\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.567590 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-bootstrap-combined-ca-bundle\") pod \"64311990-e01e-4553-89da-a3c7bb54b63c\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.567655 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jgzr\" (UniqueName: \"kubernetes.io/projected/64311990-e01e-4553-89da-a3c7bb54b63c-kube-api-access-5jgzr\") pod \"64311990-e01e-4553-89da-a3c7bb54b63c\" (UID: \"64311990-e01e-4553-89da-a3c7bb54b63c\") " Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.580022 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64311990-e01e-4553-89da-a3c7bb54b63c-kube-api-access-5jgzr" (OuterVolumeSpecName: "kube-api-access-5jgzr") pod "64311990-e01e-4553-89da-a3c7bb54b63c" (UID: "64311990-e01e-4553-89da-a3c7bb54b63c"). InnerVolumeSpecName "kube-api-access-5jgzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.593273 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "64311990-e01e-4553-89da-a3c7bb54b63c" (UID: "64311990-e01e-4553-89da-a3c7bb54b63c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.609296 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-inventory" (OuterVolumeSpecName: "inventory") pod "64311990-e01e-4553-89da-a3c7bb54b63c" (UID: "64311990-e01e-4553-89da-a3c7bb54b63c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.615655 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64311990-e01e-4553-89da-a3c7bb54b63c" (UID: "64311990-e01e-4553-89da-a3c7bb54b63c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.671314 4784 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.671395 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jgzr\" (UniqueName: \"kubernetes.io/projected/64311990-e01e-4553-89da-a3c7bb54b63c-kube-api-access-5jgzr\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.671407 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.671417 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64311990-e01e-4553-89da-a3c7bb54b63c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.785866 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" event={"ID":"64311990-e01e-4553-89da-a3c7bb54b63c","Type":"ContainerDied","Data":"f55ee62717e6661e16d57eaeaaf7c57ce70e141d409f460cb8c9e7143a05788f"} Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.785937 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55ee62717e6661e16d57eaeaaf7c57ce70e141d409f460cb8c9e7143a05788f" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.785952 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.924617 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh"] Jan 23 06:51:25 crc kubenswrapper[4784]: E0123 06:51:25.925302 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be031115-f6d6-44ed-9e60-a846908804bc" containerName="registry-server" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.925342 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="be031115-f6d6-44ed-9e60-a846908804bc" containerName="registry-server" Jan 23 06:51:25 crc kubenswrapper[4784]: E0123 06:51:25.925368 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be031115-f6d6-44ed-9e60-a846908804bc" containerName="extract-content" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.925375 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="be031115-f6d6-44ed-9e60-a846908804bc" containerName="extract-content" Jan 23 06:51:25 crc kubenswrapper[4784]: E0123 06:51:25.925406 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64311990-e01e-4553-89da-a3c7bb54b63c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.925415 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="64311990-e01e-4553-89da-a3c7bb54b63c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 06:51:25 crc kubenswrapper[4784]: E0123 06:51:25.925439 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be031115-f6d6-44ed-9e60-a846908804bc" containerName="extract-utilities" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.925446 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="be031115-f6d6-44ed-9e60-a846908804bc" containerName="extract-utilities" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.925726 
4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="64311990-e01e-4553-89da-a3c7bb54b63c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.925773 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="be031115-f6d6-44ed-9e60-a846908804bc" containerName="registry-server" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.926864 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.929832 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.930185 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.930476 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.930999 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.946732 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh"] Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.980375 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:25 
crc kubenswrapper[4784]: I0123 06:51:25.980457 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:25 crc kubenswrapper[4784]: I0123 06:51:25.980580 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sz6h\" (UniqueName: \"kubernetes.io/projected/8183fd3f-f4c4-45b4-950d-c12e94455abe-kube-api-access-4sz6h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.083652 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.083717 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.083804 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sz6h\" (UniqueName: 
\"kubernetes.io/projected/8183fd3f-f4c4-45b4-950d-c12e94455abe-kube-api-access-4sz6h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.092174 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.099013 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.104229 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sz6h\" (UniqueName: \"kubernetes.io/projected/8183fd3f-f4c4-45b4-950d-c12e94455abe-kube-api-access-4sz6h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.254189 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:51:26 crc kubenswrapper[4784]: E0123 06:51:26.254568 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.255961 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:51:26 crc kubenswrapper[4784]: I0123 06:51:26.886338 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh"] Jan 23 06:51:27 crc kubenswrapper[4784]: I0123 06:51:27.814644 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" event={"ID":"8183fd3f-f4c4-45b4-950d-c12e94455abe","Type":"ContainerStarted","Data":"fe0e27228ba7f036df6cd483516eda52d61a399e4a9e4568a5fe072a9613fa80"} Jan 23 06:51:27 crc kubenswrapper[4784]: I0123 06:51:27.815704 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" event={"ID":"8183fd3f-f4c4-45b4-950d-c12e94455abe","Type":"ContainerStarted","Data":"482fae38d4ca458e6c4f8ea32a4641375e2a547bd3b2a5cb57f5ce329417ac76"} Jan 23 06:51:27 crc kubenswrapper[4784]: I0123 06:51:27.856110 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" podStartSLOduration=2.340840617 podStartE2EDuration="2.856080365s" podCreationTimestamp="2026-01-23 06:51:25 +0000 UTC" firstStartedPulling="2026-01-23 06:51:26.868333806 +0000 UTC m=+1890.100841780" lastFinishedPulling="2026-01-23 06:51:27.383573554 +0000 UTC m=+1890.616081528" observedRunningTime="2026-01-23 06:51:27.84405659 +0000 UTC m=+1891.076564584" watchObservedRunningTime="2026-01-23 
06:51:27.856080365 +0000 UTC m=+1891.088588339" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.530110 4784 scope.go:117] "RemoveContainer" containerID="5e926b8bd80471188814e5a1400c0c8285188f77b62eddf099065fdb13eac7c3" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.557981 4784 scope.go:117] "RemoveContainer" containerID="fb78682cb5e78f2ebf2121cc2e05e3064a57e2dd9bdb95b1386a3f5a4be86a68" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.589424 4784 scope.go:117] "RemoveContainer" containerID="4265f0db769a3a352a0e60c5895daf630075b8977facb0b8ae6f4b62ea89b803" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.659908 4784 scope.go:117] "RemoveContainer" containerID="113d5bd26b46642d3068487bf0ed3e41fa897b5a0c72eb80fe257089c060e66c" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.729163 4784 scope.go:117] "RemoveContainer" containerID="0174c223af2d13c5f032c923ab6225eeba946f0503767b538c856787040477cf" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.791287 4784 scope.go:117] "RemoveContainer" containerID="efb4745b9899f5ed787d59bf4b3116a114fa21aa0e3351f3f7155317e4d54306" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.830848 4784 scope.go:117] "RemoveContainer" containerID="a267b004bcad066ff3307519e0042f383420edf3b98aa4a3aa2404f0beeac50c" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.898561 4784 scope.go:117] "RemoveContainer" containerID="bce5201a338d649c6897cd05de92f8fa4a29c986753786f3c945739266e30fc4" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.930609 4784 scope.go:117] "RemoveContainer" containerID="1de34ec5b1a10fdd82f326a8ba23bfb67713045202eb7f2b12ceb3f157decba8" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.956998 4784 scope.go:117] "RemoveContainer" containerID="b75b3e7bf5e6f4341b69a679701df366fecb241624bf2b744ba9a37364cc410e" Jan 23 06:51:34 crc kubenswrapper[4784]: I0123 06:51:34.985473 4784 scope.go:117] "RemoveContainer" 
containerID="575e659b056a0d140f3523036f9c949c31d7be4c8b01ce54b34960a8b27c76f6" Jan 23 06:51:35 crc kubenswrapper[4784]: I0123 06:51:35.009687 4784 scope.go:117] "RemoveContainer" containerID="80977d5826b9f086d45f13c7d275bbd2cee5caf6c832bd1b8b2f1fe171894961" Jan 23 06:51:35 crc kubenswrapper[4784]: I0123 06:51:35.038788 4784 scope.go:117] "RemoveContainer" containerID="933f6695e90ee42a3851694317be8b5def78d304fbfe7d0d6b0f20f910900f7e" Jan 23 06:51:35 crc kubenswrapper[4784]: I0123 06:51:35.070953 4784 scope.go:117] "RemoveContainer" containerID="c1ea2f099a7af139001266c2131d65ee81baada09f224f2a0c0353a50b36daee" Jan 23 06:51:35 crc kubenswrapper[4784]: I0123 06:51:35.103314 4784 scope.go:117] "RemoveContainer" containerID="2108edd466edd8637d61fa8a9ba8630f95fef7790fe21b3a047d8479558cfe3d" Jan 23 06:51:35 crc kubenswrapper[4784]: I0123 06:51:35.130213 4784 scope.go:117] "RemoveContainer" containerID="6f54f5e2e6870280636a0767f6b0ae631cba4e8e5b12c4910b6b39b14cdf5e7b" Jan 23 06:51:35 crc kubenswrapper[4784]: I0123 06:51:35.163592 4784 scope.go:117] "RemoveContainer" containerID="43f888398afecd79d9cf153ba1f77691ec6f544e6978783ea54a915703bfa839" Jan 23 06:51:35 crc kubenswrapper[4784]: I0123 06:51:35.186415 4784 scope.go:117] "RemoveContainer" containerID="1ebb00c7a08cc26d903c925f480fe8c208ade687bfc268ecd5f437818d169b45" Jan 23 06:51:35 crc kubenswrapper[4784]: I0123 06:51:35.217476 4784 scope.go:117] "RemoveContainer" containerID="b113c8d058e9a8bfa4b5384b482cb01c185bffb8c37c32fa00ba7ef9d139cc0d" Jan 23 06:51:37 crc kubenswrapper[4784]: I0123 06:51:37.262944 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:51:37 crc kubenswrapper[4784]: E0123 06:51:37.264264 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:51:44 crc kubenswrapper[4784]: I0123 06:51:44.066958 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fsq8w"] Jan 23 06:51:44 crc kubenswrapper[4784]: I0123 06:51:44.082117 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fsq8w"] Jan 23 06:51:45 crc kubenswrapper[4784]: I0123 06:51:45.267684 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355a352a-3ae0-4db7-9a25-3588f4233973" path="/var/lib/kubelet/pods/355a352a-3ae0-4db7-9a25-3588f4233973/volumes" Jan 23 06:51:51 crc kubenswrapper[4784]: I0123 06:51:51.039806 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dqv9q"] Jan 23 06:51:51 crc kubenswrapper[4784]: I0123 06:51:51.052941 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dqv9q"] Jan 23 06:51:51 crc kubenswrapper[4784]: I0123 06:51:51.267261 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a92a258-aeae-45d3-ac60-f5d9033a0e5c" path="/var/lib/kubelet/pods/4a92a258-aeae-45d3-ac60-f5d9033a0e5c/volumes" Jan 23 06:51:52 crc kubenswrapper[4784]: I0123 06:51:52.254337 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:51:52 crc kubenswrapper[4784]: E0123 06:51:52.254784 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" 
podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:52:06 crc kubenswrapper[4784]: I0123 06:52:06.254635 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:52:07 crc kubenswrapper[4784]: I0123 06:52:07.349537 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"9bd4600bcba967d7f7054c915be757b108173dd1d97a02b48ff9bdbc943173d5"} Jan 23 06:52:13 crc kubenswrapper[4784]: I0123 06:52:13.055731 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-bkcjh"] Jan 23 06:52:13 crc kubenswrapper[4784]: I0123 06:52:13.065685 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-bkcjh"] Jan 23 06:52:13 crc kubenswrapper[4784]: I0123 06:52:13.266331 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ada74437-66bf-4316-a16d-89377a5b5e41" path="/var/lib/kubelet/pods/ada74437-66bf-4316-a16d-89377a5b5e41/volumes" Jan 23 06:52:23 crc kubenswrapper[4784]: I0123 06:52:23.037831 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cf94j"] Jan 23 06:52:23 crc kubenswrapper[4784]: I0123 06:52:23.052994 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cf94j"] Jan 23 06:52:23 crc kubenswrapper[4784]: I0123 06:52:23.266641 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d8e7e9-165a-4248-a591-e47f1313c8d0" path="/var/lib/kubelet/pods/e5d8e7e9-165a-4248-a591-e47f1313c8d0/volumes" Jan 23 06:52:33 crc kubenswrapper[4784]: I0123 06:52:33.040324 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-pzpcf"] Jan 23 06:52:33 crc kubenswrapper[4784]: I0123 06:52:33.052938 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-pzpcf"] Jan 23 06:52:33 crc kubenswrapper[4784]: I0123 06:52:33.268694 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d58b6a2a-7217-4621-8e1a-c8297e74a086" path="/var/lib/kubelet/pods/d58b6a2a-7217-4621-8e1a-c8297e74a086/volumes" Jan 23 06:52:35 crc kubenswrapper[4784]: I0123 06:52:35.676989 4784 scope.go:117] "RemoveContainer" containerID="6d5e9fdb4563a080ef3471fe24a63735b7ccee215761c9f86b20e8a6c91f39fd" Jan 23 06:52:35 crc kubenswrapper[4784]: I0123 06:52:35.740929 4784 scope.go:117] "RemoveContainer" containerID="e68ae9ac235abbb2055e0aa5afb1be12d5913a81f097b80e4aeedf307562d8f8" Jan 23 06:52:35 crc kubenswrapper[4784]: I0123 06:52:35.792100 4784 scope.go:117] "RemoveContainer" containerID="ed9b5a18514a804502fc6eb516d4ee9fd16d688e0d6471302220d67e35cab39f" Jan 23 06:52:35 crc kubenswrapper[4784]: I0123 06:52:35.885400 4784 scope.go:117] "RemoveContainer" containerID="ac5813abde32d54186db6d3ca0af0b6805c61158c989e027279cdd483152c8e0" Jan 23 06:52:35 crc kubenswrapper[4784]: I0123 06:52:35.926391 4784 scope.go:117] "RemoveContainer" containerID="5106bfcf0e4ae760d500cace7f3a85f1a6c5944ec65d8337657cbed981815e01" Jan 23 06:52:36 crc kubenswrapper[4784]: I0123 06:52:36.056332 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-j4q25"] Jan 23 06:52:36 crc kubenswrapper[4784]: I0123 06:52:36.071213 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j4q25"] Jan 23 06:52:37 crc kubenswrapper[4784]: I0123 06:52:37.273382 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9cb908c-22d4-4554-b394-68e4e32793f3" path="/var/lib/kubelet/pods/f9cb908c-22d4-4554-b394-68e4e32793f3/volumes" Jan 23 06:52:51 crc kubenswrapper[4784]: I0123 06:52:51.058222 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-g49wt"] Jan 23 06:52:51 crc kubenswrapper[4784]: I0123 06:52:51.065440 4784 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-g49wt"] Jan 23 06:52:51 crc kubenswrapper[4784]: I0123 06:52:51.267834 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d192b60c-bc41-4f7d-9c61-2748ad0f8a7f" path="/var/lib/kubelet/pods/d192b60c-bc41-4f7d-9c61-2748ad0f8a7f/volumes" Jan 23 06:52:53 crc kubenswrapper[4784]: I0123 06:52:53.043466 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tvpzc"] Jan 23 06:52:53 crc kubenswrapper[4784]: I0123 06:52:53.055386 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tvpzc"] Jan 23 06:52:53 crc kubenswrapper[4784]: I0123 06:52:53.268602 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52f206e-7230-4c60-a8c2-ad6cebabc434" path="/var/lib/kubelet/pods/e52f206e-7230-4c60-a8c2-ad6cebabc434/volumes" Jan 23 06:53:22 crc kubenswrapper[4784]: I0123 06:53:22.262607 4784 generic.go:334] "Generic (PLEG): container finished" podID="8183fd3f-f4c4-45b4-950d-c12e94455abe" containerID="fe0e27228ba7f036df6cd483516eda52d61a399e4a9e4568a5fe072a9613fa80" exitCode=0 Jan 23 06:53:22 crc kubenswrapper[4784]: I0123 06:53:22.262681 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" event={"ID":"8183fd3f-f4c4-45b4-950d-c12e94455abe","Type":"ContainerDied","Data":"fe0e27228ba7f036df6cd483516eda52d61a399e4a9e4568a5fe072a9613fa80"} Jan 23 06:53:23 crc kubenswrapper[4784]: I0123 06:53:23.776929 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:53:23 crc kubenswrapper[4784]: I0123 06:53:23.946545 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-inventory\") pod \"8183fd3f-f4c4-45b4-950d-c12e94455abe\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " Jan 23 06:53:23 crc kubenswrapper[4784]: I0123 06:53:23.946615 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-ssh-key-openstack-edpm-ipam\") pod \"8183fd3f-f4c4-45b4-950d-c12e94455abe\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " Jan 23 06:53:23 crc kubenswrapper[4784]: I0123 06:53:23.946729 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sz6h\" (UniqueName: \"kubernetes.io/projected/8183fd3f-f4c4-45b4-950d-c12e94455abe-kube-api-access-4sz6h\") pod \"8183fd3f-f4c4-45b4-950d-c12e94455abe\" (UID: \"8183fd3f-f4c4-45b4-950d-c12e94455abe\") " Jan 23 06:53:23 crc kubenswrapper[4784]: I0123 06:53:23.954729 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8183fd3f-f4c4-45b4-950d-c12e94455abe-kube-api-access-4sz6h" (OuterVolumeSpecName: "kube-api-access-4sz6h") pod "8183fd3f-f4c4-45b4-950d-c12e94455abe" (UID: "8183fd3f-f4c4-45b4-950d-c12e94455abe"). InnerVolumeSpecName "kube-api-access-4sz6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:23 crc kubenswrapper[4784]: I0123 06:53:23.981415 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-inventory" (OuterVolumeSpecName: "inventory") pod "8183fd3f-f4c4-45b4-950d-c12e94455abe" (UID: "8183fd3f-f4c4-45b4-950d-c12e94455abe"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:53:23 crc kubenswrapper[4784]: I0123 06:53:23.986031 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8183fd3f-f4c4-45b4-950d-c12e94455abe" (UID: "8183fd3f-f4c4-45b4-950d-c12e94455abe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.050318 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.050836 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8183fd3f-f4c4-45b4-950d-c12e94455abe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.050852 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sz6h\" (UniqueName: \"kubernetes.io/projected/8183fd3f-f4c4-45b4-950d-c12e94455abe-kube-api-access-4sz6h\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.287018 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" event={"ID":"8183fd3f-f4c4-45b4-950d-c12e94455abe","Type":"ContainerDied","Data":"482fae38d4ca458e6c4f8ea32a4641375e2a547bd3b2a5cb57f5ce329417ac76"} Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.287088 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.287108 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="482fae38d4ca458e6c4f8ea32a4641375e2a547bd3b2a5cb57f5ce329417ac76" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.383875 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4"] Jan 23 06:53:24 crc kubenswrapper[4784]: E0123 06:53:24.384528 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8183fd3f-f4c4-45b4-950d-c12e94455abe" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.384556 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8183fd3f-f4c4-45b4-950d-c12e94455abe" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.384865 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8183fd3f-f4c4-45b4-950d-c12e94455abe" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.386154 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.388824 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.388836 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.389211 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.389700 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.405065 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4"] Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.463855 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.463930 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg8mm\" (UniqueName: \"kubernetes.io/projected/5c868e7c-e48a-4534-a594-a785fcd2e39e-kube-api-access-bg8mm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: 
I0123 06:53:24.463978 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.566403 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.566475 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg8mm\" (UniqueName: \"kubernetes.io/projected/5c868e7c-e48a-4534-a594-a785fcd2e39e-kube-api-access-bg8mm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.566518 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.572240 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.572278 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.586536 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg8mm\" (UniqueName: \"kubernetes.io/projected/5c868e7c-e48a-4534-a594-a785fcd2e39e-kube-api-access-bg8mm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:24 crc kubenswrapper[4784]: I0123 06:53:24.722154 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:53:25 crc kubenswrapper[4784]: I0123 06:53:25.362949 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4"] Jan 23 06:53:25 crc kubenswrapper[4784]: W0123 06:53:25.373792 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c868e7c_e48a_4534_a594_a785fcd2e39e.slice/crio-287a0c1b40c0f91871742de18442897b8d8d8f33e51191d558b7fc45272bb280 WatchSource:0}: Error finding container 287a0c1b40c0f91871742de18442897b8d8d8f33e51191d558b7fc45272bb280: Status 404 returned error can't find the container with id 287a0c1b40c0f91871742de18442897b8d8d8f33e51191d558b7fc45272bb280 Jan 23 06:53:25 crc kubenswrapper[4784]: I0123 06:53:25.379127 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:53:26 crc kubenswrapper[4784]: I0123 06:53:26.316359 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" event={"ID":"5c868e7c-e48a-4534-a594-a785fcd2e39e","Type":"ContainerStarted","Data":"287a0c1b40c0f91871742de18442897b8d8d8f33e51191d558b7fc45272bb280"} Jan 23 06:53:27 crc kubenswrapper[4784]: I0123 06:53:27.336559 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" event={"ID":"5c868e7c-e48a-4534-a594-a785fcd2e39e","Type":"ContainerStarted","Data":"cc12cbf4f7bec374af0a7a010672d4e957ef38ce8659d335a6127e6be316d908"} Jan 23 06:53:27 crc kubenswrapper[4784]: I0123 06:53:27.366424 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" podStartSLOduration=2.883900123 podStartE2EDuration="3.366397159s" 
podCreationTimestamp="2026-01-23 06:53:24 +0000 UTC" firstStartedPulling="2026-01-23 06:53:25.378661761 +0000 UTC m=+2008.611169735" lastFinishedPulling="2026-01-23 06:53:25.861158797 +0000 UTC m=+2009.093666771" observedRunningTime="2026-01-23 06:53:27.360542736 +0000 UTC m=+2010.593050730" watchObservedRunningTime="2026-01-23 06:53:27.366397159 +0000 UTC m=+2010.598905123" Jan 23 06:53:36 crc kubenswrapper[4784]: I0123 06:53:36.121274 4784 scope.go:117] "RemoveContainer" containerID="e860af3f0d45f5d944974d7e993679f89e0bde07ecad65d242b18270fcb996a2" Jan 23 06:53:36 crc kubenswrapper[4784]: I0123 06:53:36.155211 4784 scope.go:117] "RemoveContainer" containerID="853d5e9d25ae13526c1352113d4d9952d243c46b955a65b89a06f35d4b1470dd" Jan 23 06:53:36 crc kubenswrapper[4784]: I0123 06:53:36.182977 4784 scope.go:117] "RemoveContainer" containerID="a975ce331cc7827d3fa606c590f022a940697b4732bf746faba00bf4b6a3e3d3" Jan 23 06:53:36 crc kubenswrapper[4784]: I0123 06:53:36.264098 4784 scope.go:117] "RemoveContainer" containerID="d50e7c0e88ae98b6a02097aa02bd8ed3b1d22b945cc906f3d3700e2aec4afc9f" Jan 23 06:53:36 crc kubenswrapper[4784]: I0123 06:53:36.348609 4784 scope.go:117] "RemoveContainer" containerID="073776a870466ed2af0bc20d6315b03a9d062d43d0d5545bfe815974d5bd1f72" Jan 23 06:53:36 crc kubenswrapper[4784]: I0123 06:53:36.404069 4784 scope.go:117] "RemoveContainer" containerID="c692803d50a9ab6d420bbd22e6b0cd4a2e3e2c1935e9a7fdef361916b215416c" Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 06:53:41.084010 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b75f-account-create-update-nlbg8"] Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 06:53:41.099470 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-q7h5p"] Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 06:53:41.112489 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-z6kmb"] Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 
06:53:41.126662 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-q7h5p"] Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 06:53:41.139671 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-z6kmb"] Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 06:53:41.149509 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b75f-account-create-update-nlbg8"] Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 06:53:41.281230 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c060372-b812-4f94-90c1-b87a4a20c12e" path="/var/lib/kubelet/pods/1c060372-b812-4f94-90c1-b87a4a20c12e/volumes" Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 06:53:41.281947 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8576e5ec-00dc-45b9-93b2-b76f32e3e92d" path="/var/lib/kubelet/pods/8576e5ec-00dc-45b9-93b2-b76f32e3e92d/volumes" Jan 23 06:53:41 crc kubenswrapper[4784]: I0123 06:53:41.282543 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea803a89-1983-44be-bf13-ac41e92eec7e" path="/var/lib/kubelet/pods/ea803a89-1983-44be-bf13-ac41e92eec7e/volumes" Jan 23 06:53:46 crc kubenswrapper[4784]: I0123 06:53:46.038359 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hklgl"] Jan 23 06:53:46 crc kubenswrapper[4784]: I0123 06:53:46.052274 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hklgl"] Jan 23 06:53:47 crc kubenswrapper[4784]: I0123 06:53:47.041090 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1458-account-create-update-v72tc"] Jan 23 06:53:47 crc kubenswrapper[4784]: I0123 06:53:47.055672 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-43e3-account-create-update-9c2q2"] Jan 23 06:53:47 crc kubenswrapper[4784]: I0123 06:53:47.063058 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-1458-account-create-update-v72tc"] Jan 23 06:53:47 crc kubenswrapper[4784]: I0123 06:53:47.076509 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-43e3-account-create-update-9c2q2"] Jan 23 06:53:47 crc kubenswrapper[4784]: I0123 06:53:47.266407 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35725aa2-6c23-4676-a612-b169efb88e5b" path="/var/lib/kubelet/pods/35725aa2-6c23-4676-a612-b169efb88e5b/volumes" Jan 23 06:53:47 crc kubenswrapper[4784]: I0123 06:53:47.267359 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384a5279-9005-4fd7-882e-e14349adfe06" path="/var/lib/kubelet/pods/384a5279-9005-4fd7-882e-e14349adfe06/volumes" Jan 23 06:53:47 crc kubenswrapper[4784]: I0123 06:53:47.268194 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c136c6-ca42-4080-ac37-582e3e86847f" path="/var/lib/kubelet/pods/a6c136c6-ca42-4080-ac37-582e3e86847f/volumes" Jan 23 06:54:22 crc kubenswrapper[4784]: I0123 06:54:22.051061 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xv9ck"] Jan 23 06:54:22 crc kubenswrapper[4784]: I0123 06:54:22.075604 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xv9ck"] Jan 23 06:54:23 crc kubenswrapper[4784]: I0123 06:54:23.287791 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b495cf-9626-42ed-ad77-e58aadea9973" path="/var/lib/kubelet/pods/27b495cf-9626-42ed-ad77-e58aadea9973/volumes" Jan 23 06:54:23 crc kubenswrapper[4784]: I0123 06:54:23.603805 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:54:23 crc kubenswrapper[4784]: I0123 06:54:23.603907 
4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:54:36 crc kubenswrapper[4784]: I0123 06:54:36.616785 4784 scope.go:117] "RemoveContainer" containerID="dfb52d448c5e9d801573262cc204f60db14b6d457e22580adea27afa008f2401" Jan 23 06:54:36 crc kubenswrapper[4784]: I0123 06:54:36.662797 4784 scope.go:117] "RemoveContainer" containerID="8ef82e05c36bb7d3fb3d11394393c8ee251546855b91bc2e56368ee9d2c74116" Jan 23 06:54:36 crc kubenswrapper[4784]: I0123 06:54:36.715362 4784 scope.go:117] "RemoveContainer" containerID="d6f1f280bdd9658fb63118eb1be953d286c228ebaccc4e7732a2527be84f7df3" Jan 23 06:54:36 crc kubenswrapper[4784]: I0123 06:54:36.770804 4784 scope.go:117] "RemoveContainer" containerID="6b93e36a22a950541a42251c9be727e7ff4492866306647db8fddc74b9c95e6d" Jan 23 06:54:36 crc kubenswrapper[4784]: I0123 06:54:36.819946 4784 scope.go:117] "RemoveContainer" containerID="ca9ab8623cbdfb7e71bc4c9f5cb4608a5b0db8854890781ddd56244a030d3b7e" Jan 23 06:54:36 crc kubenswrapper[4784]: I0123 06:54:36.872073 4784 scope.go:117] "RemoveContainer" containerID="679700483f426dcd81199f44c303def09c81e5f9f8be5981ae78876a890280cd" Jan 23 06:54:36 crc kubenswrapper[4784]: I0123 06:54:36.955382 4784 scope.go:117] "RemoveContainer" containerID="101cf29ae09d0239b57abaf7afba3ce1d158cb25ddbf52892ba3f5f01453dc45" Jan 23 06:54:49 crc kubenswrapper[4784]: E0123 06:54:49.466977 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c868e7c_e48a_4534_a594_a785fcd2e39e.slice/crio-cc12cbf4f7bec374af0a7a010672d4e957ef38ce8659d335a6127e6be316d908.scope\": RecentStats: unable to find data in memory cache]" 
Jan 23 06:54:50 crc kubenswrapper[4784]: I0123 06:54:50.265094 4784 generic.go:334] "Generic (PLEG): container finished" podID="5c868e7c-e48a-4534-a594-a785fcd2e39e" containerID="cc12cbf4f7bec374af0a7a010672d4e957ef38ce8659d335a6127e6be316d908" exitCode=0 Jan 23 06:54:50 crc kubenswrapper[4784]: I0123 06:54:50.265162 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" event={"ID":"5c868e7c-e48a-4534-a594-a785fcd2e39e","Type":"ContainerDied","Data":"cc12cbf4f7bec374af0a7a010672d4e957ef38ce8659d335a6127e6be316d908"} Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.049505 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dtt27"] Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.061282 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dtt27"] Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.270477 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9104535-ee58-4cc4-ac36-18a922118bed" path="/var/lib/kubelet/pods/c9104535-ee58-4cc4-ac36-18a922118bed/volumes" Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.733278 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.892630 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg8mm\" (UniqueName: \"kubernetes.io/projected/5c868e7c-e48a-4534-a594-a785fcd2e39e-kube-api-access-bg8mm\") pod \"5c868e7c-e48a-4534-a594-a785fcd2e39e\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.893289 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-inventory\") pod \"5c868e7c-e48a-4534-a594-a785fcd2e39e\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.893354 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-ssh-key-openstack-edpm-ipam\") pod \"5c868e7c-e48a-4534-a594-a785fcd2e39e\" (UID: \"5c868e7c-e48a-4534-a594-a785fcd2e39e\") " Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.905335 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c868e7c-e48a-4534-a594-a785fcd2e39e-kube-api-access-bg8mm" (OuterVolumeSpecName: "kube-api-access-bg8mm") pod "5c868e7c-e48a-4534-a594-a785fcd2e39e" (UID: "5c868e7c-e48a-4534-a594-a785fcd2e39e"). InnerVolumeSpecName "kube-api-access-bg8mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.928253 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-inventory" (OuterVolumeSpecName: "inventory") pod "5c868e7c-e48a-4534-a594-a785fcd2e39e" (UID: "5c868e7c-e48a-4534-a594-a785fcd2e39e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.931087 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c868e7c-e48a-4534-a594-a785fcd2e39e" (UID: "5c868e7c-e48a-4534-a594-a785fcd2e39e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.996639 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.996713 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c868e7c-e48a-4534-a594-a785fcd2e39e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:51 crc kubenswrapper[4784]: I0123 06:54:51.996727 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg8mm\" (UniqueName: \"kubernetes.io/projected/5c868e7c-e48a-4534-a594-a785fcd2e39e-kube-api-access-bg8mm\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.041184 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t4pnl"] Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.054892 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t4pnl"] Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.289523 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" 
event={"ID":"5c868e7c-e48a-4534-a594-a785fcd2e39e","Type":"ContainerDied","Data":"287a0c1b40c0f91871742de18442897b8d8d8f33e51191d558b7fc45272bb280"} Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.289583 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287a0c1b40c0f91871742de18442897b8d8d8f33e51191d558b7fc45272bb280" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.289671 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.403627 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd"] Jan 23 06:54:52 crc kubenswrapper[4784]: E0123 06:54:52.404436 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c868e7c-e48a-4534-a594-a785fcd2e39e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.404475 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c868e7c-e48a-4534-a594-a785fcd2e39e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.404699 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c868e7c-e48a-4534-a594-a785fcd2e39e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.406042 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.407776 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bbb9\" (UniqueName: \"kubernetes.io/projected/0199cc2a-5880-4f6e-b157-23bf20f33487-kube-api-access-7bbb9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.407983 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.408261 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.409066 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.409318 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.409356 4784 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.409470 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.415814 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd"] Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.511217 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bbb9\" (UniqueName: \"kubernetes.io/projected/0199cc2a-5880-4f6e-b157-23bf20f33487-kube-api-access-7bbb9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.511456 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.511772 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.518145 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.523501 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.531281 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bbb9\" (UniqueName: \"kubernetes.io/projected/0199cc2a-5880-4f6e-b157-23bf20f33487-kube-api-access-7bbb9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:52 crc kubenswrapper[4784]: I0123 06:54:52.740666 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:54:53 crc kubenswrapper[4784]: I0123 06:54:53.266524 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c9045d-accf-4fe6-b22a-1b9cee39a56c" path="/var/lib/kubelet/pods/95c9045d-accf-4fe6-b22a-1b9cee39a56c/volumes" Jan 23 06:54:53 crc kubenswrapper[4784]: I0123 06:54:53.330195 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd"] Jan 23 06:54:53 crc kubenswrapper[4784]: I0123 06:54:53.603676 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:54:53 crc kubenswrapper[4784]: I0123 06:54:53.604311 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:54:54 crc kubenswrapper[4784]: I0123 06:54:54.314294 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" event={"ID":"0199cc2a-5880-4f6e-b157-23bf20f33487","Type":"ContainerStarted","Data":"604d94cd096e784dd8e9855cb29d8c5f1cad46b20b2ef9a65295cbc1a05221f1"} Jan 23 06:54:54 crc kubenswrapper[4784]: I0123 06:54:54.314835 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" event={"ID":"0199cc2a-5880-4f6e-b157-23bf20f33487","Type":"ContainerStarted","Data":"d20b0c92003d93dec1fd6caf8aa37f5f5475f455091c162f2e3ce87c62aa545f"} Jan 23 06:55:00 
crc kubenswrapper[4784]: I0123 06:55:00.382030 4784 generic.go:334] "Generic (PLEG): container finished" podID="0199cc2a-5880-4f6e-b157-23bf20f33487" containerID="604d94cd096e784dd8e9855cb29d8c5f1cad46b20b2ef9a65295cbc1a05221f1" exitCode=0 Jan 23 06:55:00 crc kubenswrapper[4784]: I0123 06:55:00.382134 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" event={"ID":"0199cc2a-5880-4f6e-b157-23bf20f33487","Type":"ContainerDied","Data":"604d94cd096e784dd8e9855cb29d8c5f1cad46b20b2ef9a65295cbc1a05221f1"} Jan 23 06:55:01 crc kubenswrapper[4784]: I0123 06:55:01.825399 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:55:01 crc kubenswrapper[4784]: I0123 06:55:01.970301 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-ssh-key-openstack-edpm-ipam\") pod \"0199cc2a-5880-4f6e-b157-23bf20f33487\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " Jan 23 06:55:01 crc kubenswrapper[4784]: I0123 06:55:01.970400 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bbb9\" (UniqueName: \"kubernetes.io/projected/0199cc2a-5880-4f6e-b157-23bf20f33487-kube-api-access-7bbb9\") pod \"0199cc2a-5880-4f6e-b157-23bf20f33487\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " Jan 23 06:55:01 crc kubenswrapper[4784]: I0123 06:55:01.970587 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-inventory\") pod \"0199cc2a-5880-4f6e-b157-23bf20f33487\" (UID: \"0199cc2a-5880-4f6e-b157-23bf20f33487\") " Jan 23 06:55:01 crc kubenswrapper[4784]: I0123 06:55:01.980816 4784 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0199cc2a-5880-4f6e-b157-23bf20f33487-kube-api-access-7bbb9" (OuterVolumeSpecName: "kube-api-access-7bbb9") pod "0199cc2a-5880-4f6e-b157-23bf20f33487" (UID: "0199cc2a-5880-4f6e-b157-23bf20f33487"). InnerVolumeSpecName "kube-api-access-7bbb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.007098 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0199cc2a-5880-4f6e-b157-23bf20f33487" (UID: "0199cc2a-5880-4f6e-b157-23bf20f33487"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.007260 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-inventory" (OuterVolumeSpecName: "inventory") pod "0199cc2a-5880-4f6e-b157-23bf20f33487" (UID: "0199cc2a-5880-4f6e-b157-23bf20f33487"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.071624 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.071710 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0199cc2a-5880-4f6e-b157-23bf20f33487-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.071793 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bbb9\" (UniqueName: \"kubernetes.io/projected/0199cc2a-5880-4f6e-b157-23bf20f33487-kube-api-access-7bbb9\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.406084 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" event={"ID":"0199cc2a-5880-4f6e-b157-23bf20f33487","Type":"ContainerDied","Data":"d20b0c92003d93dec1fd6caf8aa37f5f5475f455091c162f2e3ce87c62aa545f"} Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.406644 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d20b0c92003d93dec1fd6caf8aa37f5f5475f455091c162f2e3ce87c62aa545f" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.406267 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.494933 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt"] Jan 23 06:55:02 crc kubenswrapper[4784]: E0123 06:55:02.495592 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0199cc2a-5880-4f6e-b157-23bf20f33487" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.495624 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0199cc2a-5880-4f6e-b157-23bf20f33487" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.495916 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0199cc2a-5880-4f6e-b157-23bf20f33487" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.496924 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.500281 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.500554 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.500774 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.509472 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.509800 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt"] Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.583971 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.584106 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.584145 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkptb\" (UniqueName: \"kubernetes.io/projected/0402642d-23da-49d9-9175-8bff0326b7fd-kube-api-access-fkptb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.686474 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.686528 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkptb\" (UniqueName: \"kubernetes.io/projected/0402642d-23da-49d9-9175-8bff0326b7fd-kube-api-access-fkptb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.686655 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.696209 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.703523 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.709881 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkptb\" (UniqueName: \"kubernetes.io/projected/0402642d-23da-49d9-9175-8bff0326b7fd-kube-api-access-fkptb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4vzgt\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:02 crc kubenswrapper[4784]: I0123 06:55:02.815573 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:03 crc kubenswrapper[4784]: I0123 06:55:03.405446 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt"] Jan 23 06:55:04 crc kubenswrapper[4784]: I0123 06:55:04.439424 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" event={"ID":"0402642d-23da-49d9-9175-8bff0326b7fd","Type":"ContainerStarted","Data":"163b9134cd04b17cc4d77e359eeff351fdc01957926b78fe3a6948f6e7eedbb5"} Jan 23 06:55:04 crc kubenswrapper[4784]: I0123 06:55:04.440671 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" event={"ID":"0402642d-23da-49d9-9175-8bff0326b7fd","Type":"ContainerStarted","Data":"e10b9528fecc05ea52880b45d96ae7d51c2e3860f45ef2385e3c5d3e0cf866fb"} Jan 23 06:55:04 crc kubenswrapper[4784]: I0123 06:55:04.489437 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" podStartSLOduration=2.035409855 podStartE2EDuration="2.489386563s" podCreationTimestamp="2026-01-23 06:55:02 +0000 UTC" firstStartedPulling="2026-01-23 06:55:03.411351691 +0000 UTC m=+2106.643859665" lastFinishedPulling="2026-01-23 06:55:03.865328399 +0000 UTC m=+2107.097836373" observedRunningTime="2026-01-23 06:55:04.47132405 +0000 UTC m=+2107.703832034" watchObservedRunningTime="2026-01-23 06:55:04.489386563 +0000 UTC m=+2107.721894537" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.097831 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wdqnv"] Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.102004 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.105998 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-utilities\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.106195 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffrr\" (UniqueName: \"kubernetes.io/projected/c583d3b2-20ea-459b-9887-6c8433b4b5c5-kube-api-access-jffrr\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.106534 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-catalog-content\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.114315 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdqnv"] Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.208199 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-catalog-content\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.208316 4784 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-utilities\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.208362 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jffrr\" (UniqueName: \"kubernetes.io/projected/c583d3b2-20ea-459b-9887-6c8433b4b5c5-kube-api-access-jffrr\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.209012 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-catalog-content\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.209106 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-utilities\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.232899 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jffrr\" (UniqueName: \"kubernetes.io/projected/c583d3b2-20ea-459b-9887-6c8433b4b5c5-kube-api-access-jffrr\") pod \"redhat-operators-wdqnv\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.441207 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:19 crc kubenswrapper[4784]: I0123 06:55:19.974466 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdqnv"] Jan 23 06:55:20 crc kubenswrapper[4784]: I0123 06:55:20.612364 4784 generic.go:334] "Generic (PLEG): container finished" podID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerID="9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f" exitCode=0 Jan 23 06:55:20 crc kubenswrapper[4784]: I0123 06:55:20.612602 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqnv" event={"ID":"c583d3b2-20ea-459b-9887-6c8433b4b5c5","Type":"ContainerDied","Data":"9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f"} Jan 23 06:55:20 crc kubenswrapper[4784]: I0123 06:55:20.612824 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqnv" event={"ID":"c583d3b2-20ea-459b-9887-6c8433b4b5c5","Type":"ContainerStarted","Data":"acb2626ffa460c4fa1823c2a544ac6eb36b6d0ef0d5f9606882d54134e287bda"} Jan 23 06:55:21 crc kubenswrapper[4784]: I0123 06:55:21.625882 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqnv" event={"ID":"c583d3b2-20ea-459b-9887-6c8433b4b5c5","Type":"ContainerStarted","Data":"4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503"} Jan 23 06:55:23 crc kubenswrapper[4784]: I0123 06:55:23.604032 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:55:23 crc kubenswrapper[4784]: I0123 06:55:23.604638 4784 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:55:23 crc kubenswrapper[4784]: I0123 06:55:23.604727 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:55:23 crc kubenswrapper[4784]: I0123 06:55:23.606437 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bd4600bcba967d7f7054c915be757b108173dd1d97a02b48ff9bdbc943173d5"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:55:23 crc kubenswrapper[4784]: I0123 06:55:23.606584 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://9bd4600bcba967d7f7054c915be757b108173dd1d97a02b48ff9bdbc943173d5" gracePeriod=600 Jan 23 06:55:24 crc kubenswrapper[4784]: I0123 06:55:24.842716 4784 generic.go:334] "Generic (PLEG): container finished" podID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerID="4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503" exitCode=0 Jan 23 06:55:24 crc kubenswrapper[4784]: I0123 06:55:24.842820 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqnv" event={"ID":"c583d3b2-20ea-459b-9887-6c8433b4b5c5","Type":"ContainerDied","Data":"4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503"} Jan 23 06:55:25 crc kubenswrapper[4784]: I0123 06:55:25.860987 4784 generic.go:334] "Generic (PLEG): container 
finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="9bd4600bcba967d7f7054c915be757b108173dd1d97a02b48ff9bdbc943173d5" exitCode=0 Jan 23 06:55:25 crc kubenswrapper[4784]: I0123 06:55:25.861058 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"9bd4600bcba967d7f7054c915be757b108173dd1d97a02b48ff9bdbc943173d5"} Jan 23 06:55:25 crc kubenswrapper[4784]: I0123 06:55:25.861563 4784 scope.go:117] "RemoveContainer" containerID="5a0fb82f60dcb434892cffc24a1b2ebb531cba8f892204c23fa0e5f92ab1bd49" Jan 23 06:55:26 crc kubenswrapper[4784]: I0123 06:55:26.877789 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqnv" event={"ID":"c583d3b2-20ea-459b-9887-6c8433b4b5c5","Type":"ContainerStarted","Data":"c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312"} Jan 23 06:55:26 crc kubenswrapper[4784]: I0123 06:55:26.888266 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3"} Jan 23 06:55:26 crc kubenswrapper[4784]: I0123 06:55:26.906543 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wdqnv" podStartSLOduration=2.76366932 podStartE2EDuration="7.906515896s" podCreationTimestamp="2026-01-23 06:55:19 +0000 UTC" firstStartedPulling="2026-01-23 06:55:20.615817227 +0000 UTC m=+2123.848325201" lastFinishedPulling="2026-01-23 06:55:25.758663793 +0000 UTC m=+2128.991171777" observedRunningTime="2026-01-23 06:55:26.900774315 +0000 UTC m=+2130.133282309" watchObservedRunningTime="2026-01-23 06:55:26.906515896 +0000 UTC m=+2130.139023870" Jan 23 06:55:29 crc kubenswrapper[4784]: I0123 
06:55:29.441538 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:29 crc kubenswrapper[4784]: I0123 06:55:29.443355 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:30 crc kubenswrapper[4784]: I0123 06:55:30.500638 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wdqnv" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="registry-server" probeResult="failure" output=< Jan 23 06:55:30 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 06:55:30 crc kubenswrapper[4784]: > Jan 23 06:55:36 crc kubenswrapper[4784]: I0123 06:55:36.053419 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5hq4v"] Jan 23 06:55:36 crc kubenswrapper[4784]: I0123 06:55:36.070506 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5hq4v"] Jan 23 06:55:37 crc kubenswrapper[4784]: I0123 06:55:37.130593 4784 scope.go:117] "RemoveContainer" containerID="9b526ad1247934d3f7b8cb407807dd71a97aa2df829c42d4a18f7c173c29cf56" Jan 23 06:55:37 crc kubenswrapper[4784]: I0123 06:55:37.194736 4784 scope.go:117] "RemoveContainer" containerID="b621f79d732e8d839f37db0483f5411a10f308b98c40d2b8ee777e82fd03805f" Jan 23 06:55:37 crc kubenswrapper[4784]: I0123 06:55:37.280282 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f775fdb3-12ca-4168-833d-2ae3a140ae7e" path="/var/lib/kubelet/pods/f775fdb3-12ca-4168-833d-2ae3a140ae7e/volumes" Jan 23 06:55:39 crc kubenswrapper[4784]: I0123 06:55:39.493381 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:39 crc kubenswrapper[4784]: I0123 06:55:39.546640 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:39 crc kubenswrapper[4784]: I0123 06:55:39.741252 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdqnv"] Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.028542 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wdqnv" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="registry-server" containerID="cri-o://c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312" gracePeriod=2 Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.552654 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.716814 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-catalog-content\") pod \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.717236 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jffrr\" (UniqueName: \"kubernetes.io/projected/c583d3b2-20ea-459b-9887-6c8433b4b5c5-kube-api-access-jffrr\") pod \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.717317 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-utilities\") pod \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\" (UID: \"c583d3b2-20ea-459b-9887-6c8433b4b5c5\") " Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.719168 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-utilities" (OuterVolumeSpecName: "utilities") pod "c583d3b2-20ea-459b-9887-6c8433b4b5c5" (UID: "c583d3b2-20ea-459b-9887-6c8433b4b5c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.726988 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c583d3b2-20ea-459b-9887-6c8433b4b5c5-kube-api-access-jffrr" (OuterVolumeSpecName: "kube-api-access-jffrr") pod "c583d3b2-20ea-459b-9887-6c8433b4b5c5" (UID: "c583d3b2-20ea-459b-9887-6c8433b4b5c5"). InnerVolumeSpecName "kube-api-access-jffrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.825288 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jffrr\" (UniqueName: \"kubernetes.io/projected/c583d3b2-20ea-459b-9887-6c8433b4b5c5-kube-api-access-jffrr\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.825337 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.858004 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c583d3b2-20ea-459b-9887-6c8433b4b5c5" (UID: "c583d3b2-20ea-459b-9887-6c8433b4b5c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:41 crc kubenswrapper[4784]: I0123 06:55:41.927950 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c583d3b2-20ea-459b-9887-6c8433b4b5c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.041277 4784 generic.go:334] "Generic (PLEG): container finished" podID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerID="c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312" exitCode=0 Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.041458 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqnv" event={"ID":"c583d3b2-20ea-459b-9887-6c8433b4b5c5","Type":"ContainerDied","Data":"c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312"} Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.041717 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqnv" event={"ID":"c583d3b2-20ea-459b-9887-6c8433b4b5c5","Type":"ContainerDied","Data":"acb2626ffa460c4fa1823c2a544ac6eb36b6d0ef0d5f9606882d54134e287bda"} Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.041582 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdqnv" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.041769 4784 scope.go:117] "RemoveContainer" containerID="c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.072035 4784 scope.go:117] "RemoveContainer" containerID="4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.082524 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdqnv"] Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.095331 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wdqnv"] Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.105547 4784 scope.go:117] "RemoveContainer" containerID="9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.165844 4784 scope.go:117] "RemoveContainer" containerID="c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312" Jan 23 06:55:42 crc kubenswrapper[4784]: E0123 06:55:42.166582 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312\": container with ID starting with c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312 not found: ID does not exist" containerID="c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.166791 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312"} err="failed to get container status \"c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312\": rpc error: code = NotFound desc = could not find container 
\"c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312\": container with ID starting with c583f473f3355d6c1b9a0bca8c75e90c0461c30e12013237a206e222eae39312 not found: ID does not exist" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.166829 4784 scope.go:117] "RemoveContainer" containerID="4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503" Jan 23 06:55:42 crc kubenswrapper[4784]: E0123 06:55:42.167345 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503\": container with ID starting with 4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503 not found: ID does not exist" containerID="4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.167572 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503"} err="failed to get container status \"4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503\": rpc error: code = NotFound desc = could not find container \"4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503\": container with ID starting with 4c1716efde84e29e3c1ed4ec9fe863eec10146410d870d12a6f8bebc67cc4503 not found: ID does not exist" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.167677 4784 scope.go:117] "RemoveContainer" containerID="9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f" Jan 23 06:55:42 crc kubenswrapper[4784]: E0123 06:55:42.168314 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f\": container with ID starting with 9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f not found: ID does not exist" 
containerID="9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f" Jan 23 06:55:42 crc kubenswrapper[4784]: I0123 06:55:42.168358 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f"} err="failed to get container status \"9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f\": rpc error: code = NotFound desc = could not find container \"9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f\": container with ID starting with 9c5d0f841fd9052aa874b575f5440b367304ec4810b70245197a0b4b3186fe6f not found: ID does not exist" Jan 23 06:55:43 crc kubenswrapper[4784]: I0123 06:55:43.268544 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" path="/var/lib/kubelet/pods/c583d3b2-20ea-459b-9887-6c8433b4b5c5/volumes" Jan 23 06:55:51 crc kubenswrapper[4784]: I0123 06:55:51.167491 4784 generic.go:334] "Generic (PLEG): container finished" podID="0402642d-23da-49d9-9175-8bff0326b7fd" containerID="163b9134cd04b17cc4d77e359eeff351fdc01957926b78fe3a6948f6e7eedbb5" exitCode=0 Jan 23 06:55:51 crc kubenswrapper[4784]: I0123 06:55:51.168231 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" event={"ID":"0402642d-23da-49d9-9175-8bff0326b7fd","Type":"ContainerDied","Data":"163b9134cd04b17cc4d77e359eeff351fdc01957926b78fe3a6948f6e7eedbb5"} Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.701698 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.814435 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-ssh-key-openstack-edpm-ipam\") pod \"0402642d-23da-49d9-9175-8bff0326b7fd\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.814846 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkptb\" (UniqueName: \"kubernetes.io/projected/0402642d-23da-49d9-9175-8bff0326b7fd-kube-api-access-fkptb\") pod \"0402642d-23da-49d9-9175-8bff0326b7fd\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.814964 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-inventory\") pod \"0402642d-23da-49d9-9175-8bff0326b7fd\" (UID: \"0402642d-23da-49d9-9175-8bff0326b7fd\") " Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.822586 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0402642d-23da-49d9-9175-8bff0326b7fd-kube-api-access-fkptb" (OuterVolumeSpecName: "kube-api-access-fkptb") pod "0402642d-23da-49d9-9175-8bff0326b7fd" (UID: "0402642d-23da-49d9-9175-8bff0326b7fd"). InnerVolumeSpecName "kube-api-access-fkptb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.860959 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0402642d-23da-49d9-9175-8bff0326b7fd" (UID: "0402642d-23da-49d9-9175-8bff0326b7fd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.863003 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-inventory" (OuterVolumeSpecName: "inventory") pod "0402642d-23da-49d9-9175-8bff0326b7fd" (UID: "0402642d-23da-49d9-9175-8bff0326b7fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.922220 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkptb\" (UniqueName: \"kubernetes.io/projected/0402642d-23da-49d9-9175-8bff0326b7fd-kube-api-access-fkptb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.922264 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:52 crc kubenswrapper[4784]: I0123 06:55:52.922276 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0402642d-23da-49d9-9175-8bff0326b7fd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.199883 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" 
event={"ID":"0402642d-23da-49d9-9175-8bff0326b7fd","Type":"ContainerDied","Data":"e10b9528fecc05ea52880b45d96ae7d51c2e3860f45ef2385e3c5d3e0cf866fb"} Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.199952 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10b9528fecc05ea52880b45d96ae7d51c2e3860f45ef2385e3c5d3e0cf866fb" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.199975 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4vzgt" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.300459 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2"] Jan 23 06:55:53 crc kubenswrapper[4784]: E0123 06:55:53.301283 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="registry-server" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.301311 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="registry-server" Jan 23 06:55:53 crc kubenswrapper[4784]: E0123 06:55:53.301332 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="extract-content" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.301344 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="extract-content" Jan 23 06:55:53 crc kubenswrapper[4784]: E0123 06:55:53.301372 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="extract-utilities" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.301382 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="extract-utilities" Jan 23 06:55:53 crc 
kubenswrapper[4784]: E0123 06:55:53.301426 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0402642d-23da-49d9-9175-8bff0326b7fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.301437 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0402642d-23da-49d9-9175-8bff0326b7fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.301690 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c583d3b2-20ea-459b-9887-6c8433b4b5c5" containerName="registry-server" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.301706 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0402642d-23da-49d9-9175-8bff0326b7fd" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.302736 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.304985 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.305848 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.306229 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.312456 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.323306 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2"] Jan 
23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.333307 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.333402 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vpm\" (UniqueName: \"kubernetes.io/projected/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-kube-api-access-j6vpm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.333580 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.437017 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.437459 4784 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-j6vpm\" (UniqueName: \"kubernetes.io/projected/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-kube-api-access-j6vpm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.437580 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.444057 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.444436 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.460821 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6vpm\" (UniqueName: \"kubernetes.io/projected/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-kube-api-access-j6vpm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") 
" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:53 crc kubenswrapper[4784]: I0123 06:55:53.624226 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:55:54 crc kubenswrapper[4784]: I0123 06:55:54.188314 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2"] Jan 23 06:55:54 crc kubenswrapper[4784]: I0123 06:55:54.228207 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" event={"ID":"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead","Type":"ContainerStarted","Data":"d64938ee4ad6cacb55afaa3a88712df32b593ab8fe1c24297aadd8a7cb149bb3"} Jan 23 06:55:55 crc kubenswrapper[4784]: I0123 06:55:55.241454 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" event={"ID":"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead","Type":"ContainerStarted","Data":"2070e7191e606871a27f8abe41f5c399f06929d35292bea3d656b820c1154e90"} Jan 23 06:55:55 crc kubenswrapper[4784]: I0123 06:55:55.277372 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" podStartSLOduration=1.722189566 podStartE2EDuration="2.277339932s" podCreationTimestamp="2026-01-23 06:55:53 +0000 UTC" firstStartedPulling="2026-01-23 06:55:54.194350723 +0000 UTC m=+2157.426858687" lastFinishedPulling="2026-01-23 06:55:54.749501079 +0000 UTC m=+2157.982009053" observedRunningTime="2026-01-23 06:55:55.267780554 +0000 UTC m=+2158.500288528" watchObservedRunningTime="2026-01-23 06:55:55.277339932 +0000 UTC m=+2158.509847906" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.135206 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6ghr"] Jan 23 06:56:19 crc 
kubenswrapper[4784]: I0123 06:56:19.141711 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.171498 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6ghr"] Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.246539 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-catalog-content\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.246891 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfkt6\" (UniqueName: \"kubernetes.io/projected/c7c68828-537c-4f70-abe4-450afa830b4c-kube-api-access-kfkt6\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.246996 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-utilities\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.349060 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-catalog-content\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 
06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.349232 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfkt6\" (UniqueName: \"kubernetes.io/projected/c7c68828-537c-4f70-abe4-450afa830b4c-kube-api-access-kfkt6\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.349299 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-utilities\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.350310 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-catalog-content\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.350593 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-utilities\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: I0123 06:56:19.377421 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfkt6\" (UniqueName: \"kubernetes.io/projected/c7c68828-537c-4f70-abe4-450afa830b4c-kube-api-access-kfkt6\") pod \"certified-operators-w6ghr\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:19 crc kubenswrapper[4784]: 
I0123 06:56:19.479933 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:20 crc kubenswrapper[4784]: I0123 06:56:20.053333 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6ghr"] Jan 23 06:56:20 crc kubenswrapper[4784]: I0123 06:56:20.507638 4784 generic.go:334] "Generic (PLEG): container finished" podID="c7c68828-537c-4f70-abe4-450afa830b4c" containerID="158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251" exitCode=0 Jan 23 06:56:20 crc kubenswrapper[4784]: I0123 06:56:20.507776 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6ghr" event={"ID":"c7c68828-537c-4f70-abe4-450afa830b4c","Type":"ContainerDied","Data":"158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251"} Jan 23 06:56:20 crc kubenswrapper[4784]: I0123 06:56:20.508101 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6ghr" event={"ID":"c7c68828-537c-4f70-abe4-450afa830b4c","Type":"ContainerStarted","Data":"d834daad3895a04672611a294553af5e27e112c88e18718617d3554cd0434dc2"} Jan 23 06:56:22 crc kubenswrapper[4784]: I0123 06:56:22.532574 4784 generic.go:334] "Generic (PLEG): container finished" podID="c7c68828-537c-4f70-abe4-450afa830b4c" containerID="ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5" exitCode=0 Jan 23 06:56:22 crc kubenswrapper[4784]: I0123 06:56:22.532725 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6ghr" event={"ID":"c7c68828-537c-4f70-abe4-450afa830b4c","Type":"ContainerDied","Data":"ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5"} Jan 23 06:56:23 crc kubenswrapper[4784]: I0123 06:56:23.547536 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6ghr" 
event={"ID":"c7c68828-537c-4f70-abe4-450afa830b4c","Type":"ContainerStarted","Data":"141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d"} Jan 23 06:56:23 crc kubenswrapper[4784]: I0123 06:56:23.585609 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6ghr" podStartSLOduration=2.107338249 podStartE2EDuration="4.585566823s" podCreationTimestamp="2026-01-23 06:56:19 +0000 UTC" firstStartedPulling="2026-01-23 06:56:20.50991917 +0000 UTC m=+2183.742427144" lastFinishedPulling="2026-01-23 06:56:22.988147754 +0000 UTC m=+2186.220655718" observedRunningTime="2026-01-23 06:56:23.572433127 +0000 UTC m=+2186.804941101" watchObservedRunningTime="2026-01-23 06:56:23.585566823 +0000 UTC m=+2186.818074797" Jan 23 06:56:29 crc kubenswrapper[4784]: I0123 06:56:29.480913 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:29 crc kubenswrapper[4784]: I0123 06:56:29.481850 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:29 crc kubenswrapper[4784]: I0123 06:56:29.535856 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:29 crc kubenswrapper[4784]: I0123 06:56:29.670235 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:29 crc kubenswrapper[4784]: I0123 06:56:29.853968 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6ghr"] Jan 23 06:56:31 crc kubenswrapper[4784]: I0123 06:56:31.631576 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6ghr" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" containerName="registry-server" 
containerID="cri-o://141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d" gracePeriod=2 Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.371277 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.413637 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-catalog-content\") pod \"c7c68828-537c-4f70-abe4-450afa830b4c\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.414127 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-utilities\") pod \"c7c68828-537c-4f70-abe4-450afa830b4c\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.414237 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfkt6\" (UniqueName: \"kubernetes.io/projected/c7c68828-537c-4f70-abe4-450afa830b4c-kube-api-access-kfkt6\") pod \"c7c68828-537c-4f70-abe4-450afa830b4c\" (UID: \"c7c68828-537c-4f70-abe4-450afa830b4c\") " Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.417416 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-utilities" (OuterVolumeSpecName: "utilities") pod "c7c68828-537c-4f70-abe4-450afa830b4c" (UID: "c7c68828-537c-4f70-abe4-450afa830b4c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.458560 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c68828-537c-4f70-abe4-450afa830b4c-kube-api-access-kfkt6" (OuterVolumeSpecName: "kube-api-access-kfkt6") pod "c7c68828-537c-4f70-abe4-450afa830b4c" (UID: "c7c68828-537c-4f70-abe4-450afa830b4c"). InnerVolumeSpecName "kube-api-access-kfkt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.472192 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7c68828-537c-4f70-abe4-450afa830b4c" (UID: "c7c68828-537c-4f70-abe4-450afa830b4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.517276 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfkt6\" (UniqueName: \"kubernetes.io/projected/c7c68828-537c-4f70-abe4-450afa830b4c-kube-api-access-kfkt6\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.517662 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.517848 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c68828-537c-4f70-abe4-450afa830b4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.647974 4784 generic.go:334] "Generic (PLEG): container finished" podID="c7c68828-537c-4f70-abe4-450afa830b4c" 
containerID="141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d" exitCode=0 Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.648025 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6ghr" event={"ID":"c7c68828-537c-4f70-abe4-450afa830b4c","Type":"ContainerDied","Data":"141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d"} Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.648065 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6ghr" event={"ID":"c7c68828-537c-4f70-abe4-450afa830b4c","Type":"ContainerDied","Data":"d834daad3895a04672611a294553af5e27e112c88e18718617d3554cd0434dc2"} Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.648088 4784 scope.go:117] "RemoveContainer" containerID="141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.648172 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6ghr" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.687832 4784 scope.go:117] "RemoveContainer" containerID="ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.700057 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6ghr"] Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.711481 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6ghr"] Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.729671 4784 scope.go:117] "RemoveContainer" containerID="158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.758078 4784 scope.go:117] "RemoveContainer" containerID="141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d" Jan 23 06:56:32 crc kubenswrapper[4784]: E0123 06:56:32.759091 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d\": container with ID starting with 141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d not found: ID does not exist" containerID="141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.759168 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d"} err="failed to get container status \"141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d\": rpc error: code = NotFound desc = could not find container \"141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d\": container with ID starting with 141515982e330c792fad914930588049915a713f78ca154355d705f8ec4c678d not 
found: ID does not exist" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.759231 4784 scope.go:117] "RemoveContainer" containerID="ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5" Jan 23 06:56:32 crc kubenswrapper[4784]: E0123 06:56:32.761185 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5\": container with ID starting with ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5 not found: ID does not exist" containerID="ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.761213 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5"} err="failed to get container status \"ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5\": rpc error: code = NotFound desc = could not find container \"ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5\": container with ID starting with ed55eafdf286b6560ea9bdfd092f872a2082923f337ab13a6b3f434ef3264cd5 not found: ID does not exist" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.761232 4784 scope.go:117] "RemoveContainer" containerID="158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251" Jan 23 06:56:32 crc kubenswrapper[4784]: E0123 06:56:32.761985 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251\": container with ID starting with 158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251 not found: ID does not exist" containerID="158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251" Jan 23 06:56:32 crc kubenswrapper[4784]: I0123 06:56:32.762026 4784 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251"} err="failed to get container status \"158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251\": rpc error: code = NotFound desc = could not find container \"158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251\": container with ID starting with 158bfde1bbc784e6ce4a8ccce3ada4d24bc71204533827e89a0ad339ffcaa251 not found: ID does not exist" Jan 23 06:56:33 crc kubenswrapper[4784]: I0123 06:56:33.269344 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" path="/var/lib/kubelet/pods/c7c68828-537c-4f70-abe4-450afa830b4c/volumes" Jan 23 06:56:37 crc kubenswrapper[4784]: I0123 06:56:37.347854 4784 scope.go:117] "RemoveContainer" containerID="04b65425da2f85022c86789a01498790868c6d97149dc8744c68abb012db3825" Jan 23 06:56:52 crc kubenswrapper[4784]: I0123 06:56:52.920328 4784 generic.go:334] "Generic (PLEG): container finished" podID="f202f8c4-5d8c-4cca-a9f6-ebf39f16cead" containerID="2070e7191e606871a27f8abe41f5c399f06929d35292bea3d656b820c1154e90" exitCode=0 Jan 23 06:56:52 crc kubenswrapper[4784]: I0123 06:56:52.920406 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" event={"ID":"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead","Type":"ContainerDied","Data":"2070e7191e606871a27f8abe41f5c399f06929d35292bea3d656b820c1154e90"} Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.466838 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.563256 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-ssh-key-openstack-edpm-ipam\") pod \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.563878 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-inventory\") pod \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.564330 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6vpm\" (UniqueName: \"kubernetes.io/projected/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-kube-api-access-j6vpm\") pod \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\" (UID: \"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead\") " Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.570430 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-kube-api-access-j6vpm" (OuterVolumeSpecName: "kube-api-access-j6vpm") pod "f202f8c4-5d8c-4cca-a9f6-ebf39f16cead" (UID: "f202f8c4-5d8c-4cca-a9f6-ebf39f16cead"). InnerVolumeSpecName "kube-api-access-j6vpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.597280 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f202f8c4-5d8c-4cca-a9f6-ebf39f16cead" (UID: "f202f8c4-5d8c-4cca-a9f6-ebf39f16cead"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.601365 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-inventory" (OuterVolumeSpecName: "inventory") pod "f202f8c4-5d8c-4cca-a9f6-ebf39f16cead" (UID: "f202f8c4-5d8c-4cca-a9f6-ebf39f16cead"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.668137 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.668177 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6vpm\" (UniqueName: \"kubernetes.io/projected/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-kube-api-access-j6vpm\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.668193 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f202f8c4-5d8c-4cca-a9f6-ebf39f16cead-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.943980 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" 
event={"ID":"f202f8c4-5d8c-4cca-a9f6-ebf39f16cead","Type":"ContainerDied","Data":"d64938ee4ad6cacb55afaa3a88712df32b593ab8fe1c24297aadd8a7cb149bb3"} Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.944044 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2" Jan 23 06:56:54 crc kubenswrapper[4784]: I0123 06:56:54.944052 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d64938ee4ad6cacb55afaa3a88712df32b593ab8fe1c24297aadd8a7cb149bb3" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.058317 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zfrjd"] Jan 23 06:56:55 crc kubenswrapper[4784]: E0123 06:56:55.058970 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" containerName="extract-utilities" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.058996 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" containerName="extract-utilities" Jan 23 06:56:55 crc kubenswrapper[4784]: E0123 06:56:55.059016 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" containerName="registry-server" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.059026 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" containerName="registry-server" Jan 23 06:56:55 crc kubenswrapper[4784]: E0123 06:56:55.059048 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f202f8c4-5d8c-4cca-a9f6-ebf39f16cead" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.059059 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f202f8c4-5d8c-4cca-a9f6-ebf39f16cead" 
containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:56:55 crc kubenswrapper[4784]: E0123 06:56:55.059081 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" containerName="extract-content" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.059088 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" containerName="extract-content" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.059358 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c68828-537c-4f70-abe4-450afa830b4c" containerName="registry-server" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.059378 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f202f8c4-5d8c-4cca-a9f6-ebf39f16cead" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.060484 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.073410 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.073633 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.074849 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.074926 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.130837 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zfrjd"] Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.185212 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.186121 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.186156 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-757jw\" (UniqueName: \"kubernetes.io/projected/0546d855-6190-43a0-8fd3-7897c1c9dc80-kube-api-access-757jw\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.289133 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.289192 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-757jw\" (UniqueName: \"kubernetes.io/projected/0546d855-6190-43a0-8fd3-7897c1c9dc80-kube-api-access-757jw\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.289342 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.295648 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.296637 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.313137 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-757jw\" (UniqueName: \"kubernetes.io/projected/0546d855-6190-43a0-8fd3-7897c1c9dc80-kube-api-access-757jw\") pod \"ssh-known-hosts-edpm-deployment-zfrjd\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:55 crc kubenswrapper[4784]: I0123 06:56:55.383520 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:56:56 crc kubenswrapper[4784]: I0123 06:56:56.217294 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zfrjd"] Jan 23 06:56:56 crc kubenswrapper[4784]: I0123 06:56:56.966015 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" event={"ID":"0546d855-6190-43a0-8fd3-7897c1c9dc80","Type":"ContainerStarted","Data":"af76b0f3f83ac69bd9e77c8b22eefa2effb8a55f247b6650f2fc3c41a262244b"} Jan 23 06:56:57 crc kubenswrapper[4784]: I0123 06:56:57.979811 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" event={"ID":"0546d855-6190-43a0-8fd3-7897c1c9dc80","Type":"ContainerStarted","Data":"7915de06497296e6d4005237c885ca384b536b26dc04635975cf0e110e8272be"} Jan 23 06:56:58 crc kubenswrapper[4784]: I0123 06:56:58.008593 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" 
podStartSLOduration=2.053174158 podStartE2EDuration="3.008567392s" podCreationTimestamp="2026-01-23 06:56:55 +0000 UTC" firstStartedPulling="2026-01-23 06:56:56.225963106 +0000 UTC m=+2219.458471080" lastFinishedPulling="2026-01-23 06:56:57.18135633 +0000 UTC m=+2220.413864314" observedRunningTime="2026-01-23 06:56:58.000889981 +0000 UTC m=+2221.233397965" watchObservedRunningTime="2026-01-23 06:56:58.008567392 +0000 UTC m=+2221.241075366" Jan 23 06:57:06 crc kubenswrapper[4784]: I0123 06:57:06.075984 4784 generic.go:334] "Generic (PLEG): container finished" podID="0546d855-6190-43a0-8fd3-7897c1c9dc80" containerID="7915de06497296e6d4005237c885ca384b536b26dc04635975cf0e110e8272be" exitCode=0 Jan 23 06:57:06 crc kubenswrapper[4784]: I0123 06:57:06.076086 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" event={"ID":"0546d855-6190-43a0-8fd3-7897c1c9dc80","Type":"ContainerDied","Data":"7915de06497296e6d4005237c885ca384b536b26dc04635975cf0e110e8272be"} Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.651032 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.785961 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-ssh-key-openstack-edpm-ipam\") pod \"0546d855-6190-43a0-8fd3-7897c1c9dc80\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.786309 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-757jw\" (UniqueName: \"kubernetes.io/projected/0546d855-6190-43a0-8fd3-7897c1c9dc80-kube-api-access-757jw\") pod \"0546d855-6190-43a0-8fd3-7897c1c9dc80\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.786386 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-inventory-0\") pod \"0546d855-6190-43a0-8fd3-7897c1c9dc80\" (UID: \"0546d855-6190-43a0-8fd3-7897c1c9dc80\") " Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.800182 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0546d855-6190-43a0-8fd3-7897c1c9dc80-kube-api-access-757jw" (OuterVolumeSpecName: "kube-api-access-757jw") pod "0546d855-6190-43a0-8fd3-7897c1c9dc80" (UID: "0546d855-6190-43a0-8fd3-7897c1c9dc80"). InnerVolumeSpecName "kube-api-access-757jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.824075 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0546d855-6190-43a0-8fd3-7897c1c9dc80" (UID: "0546d855-6190-43a0-8fd3-7897c1c9dc80"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.826932 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0546d855-6190-43a0-8fd3-7897c1c9dc80" (UID: "0546d855-6190-43a0-8fd3-7897c1c9dc80"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.888953 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-757jw\" (UniqueName: \"kubernetes.io/projected/0546d855-6190-43a0-8fd3-7897c1c9dc80-kube-api-access-757jw\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.888994 4784 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:07 crc kubenswrapper[4784]: I0123 06:57:07.889006 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0546d855-6190-43a0-8fd3-7897c1c9dc80-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.104408 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" event={"ID":"0546d855-6190-43a0-8fd3-7897c1c9dc80","Type":"ContainerDied","Data":"af76b0f3f83ac69bd9e77c8b22eefa2effb8a55f247b6650f2fc3c41a262244b"} Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.104486 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af76b0f3f83ac69bd9e77c8b22eefa2effb8a55f247b6650f2fc3c41a262244b" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.104555 
4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zfrjd" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.190787 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn"] Jan 23 06:57:08 crc kubenswrapper[4784]: E0123 06:57:08.191260 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0546d855-6190-43a0-8fd3-7897c1c9dc80" containerName="ssh-known-hosts-edpm-deployment" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.191282 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0546d855-6190-43a0-8fd3-7897c1c9dc80" containerName="ssh-known-hosts-edpm-deployment" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.191501 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="0546d855-6190-43a0-8fd3-7897c1c9dc80" containerName="ssh-known-hosts-edpm-deployment" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.192256 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.196287 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.196570 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.196711 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.208314 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.232110 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn"] Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.298197 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.298366 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.298422 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjr7h\" (UniqueName: \"kubernetes.io/projected/ebc0675f-b9ae-44e0-bfb8-601977c9936c-kube-api-access-vjr7h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.400873 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.401443 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.401511 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjr7h\" (UniqueName: \"kubernetes.io/projected/ebc0675f-b9ae-44e0-bfb8-601977c9936c-kube-api-access-vjr7h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.407631 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.413491 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.427966 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjr7h\" (UniqueName: \"kubernetes.io/projected/ebc0675f-b9ae-44e0-bfb8-601977c9936c-kube-api-access-vjr7h\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-svmtn\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:08 crc kubenswrapper[4784]: I0123 06:57:08.510131 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:09 crc kubenswrapper[4784]: I0123 06:57:09.181579 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn"] Jan 23 06:57:10 crc kubenswrapper[4784]: I0123 06:57:10.140137 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" event={"ID":"ebc0675f-b9ae-44e0-bfb8-601977c9936c","Type":"ContainerStarted","Data":"b9ad78dbdc8af2577bc786b0569e32e02848c1566b19e08fd4d853291e9c0352"} Jan 23 06:57:10 crc kubenswrapper[4784]: I0123 06:57:10.140675 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" event={"ID":"ebc0675f-b9ae-44e0-bfb8-601977c9936c","Type":"ContainerStarted","Data":"d28d2e6dd5bdabd78c9690fb7edd9eba35ccfabb27295ef4d3f480537a3d75a7"} Jan 23 06:57:10 crc kubenswrapper[4784]: I0123 06:57:10.170253 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" podStartSLOduration=1.7621197240000002 podStartE2EDuration="2.170217609s" podCreationTimestamp="2026-01-23 06:57:08 +0000 UTC" firstStartedPulling="2026-01-23 06:57:09.199780972 +0000 UTC m=+2232.432288946" lastFinishedPulling="2026-01-23 06:57:09.607878857 +0000 UTC m=+2232.840386831" observedRunningTime="2026-01-23 06:57:10.163991402 +0000 UTC m=+2233.396499396" watchObservedRunningTime="2026-01-23 06:57:10.170217609 +0000 UTC m=+2233.402725583" Jan 23 06:57:19 crc kubenswrapper[4784]: I0123 06:57:19.250298 4784 generic.go:334] "Generic (PLEG): container finished" podID="ebc0675f-b9ae-44e0-bfb8-601977c9936c" containerID="b9ad78dbdc8af2577bc786b0569e32e02848c1566b19e08fd4d853291e9c0352" exitCode=0 Jan 23 06:57:19 crc kubenswrapper[4784]: I0123 06:57:19.250426 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" event={"ID":"ebc0675f-b9ae-44e0-bfb8-601977c9936c","Type":"ContainerDied","Data":"b9ad78dbdc8af2577bc786b0569e32e02848c1566b19e08fd4d853291e9c0352"} Jan 23 06:57:20 crc kubenswrapper[4784]: I0123 06:57:20.731143 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:20 crc kubenswrapper[4784]: I0123 06:57:20.897557 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-inventory\") pod \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " Jan 23 06:57:20 crc kubenswrapper[4784]: I0123 06:57:20.897795 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-ssh-key-openstack-edpm-ipam\") pod \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " Jan 23 06:57:20 crc kubenswrapper[4784]: I0123 06:57:20.897894 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjr7h\" (UniqueName: \"kubernetes.io/projected/ebc0675f-b9ae-44e0-bfb8-601977c9936c-kube-api-access-vjr7h\") pod \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\" (UID: \"ebc0675f-b9ae-44e0-bfb8-601977c9936c\") " Jan 23 06:57:20 crc kubenswrapper[4784]: I0123 06:57:20.906187 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc0675f-b9ae-44e0-bfb8-601977c9936c-kube-api-access-vjr7h" (OuterVolumeSpecName: "kube-api-access-vjr7h") pod "ebc0675f-b9ae-44e0-bfb8-601977c9936c" (UID: "ebc0675f-b9ae-44e0-bfb8-601977c9936c"). InnerVolumeSpecName "kube-api-access-vjr7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:20 crc kubenswrapper[4784]: I0123 06:57:20.932669 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ebc0675f-b9ae-44e0-bfb8-601977c9936c" (UID: "ebc0675f-b9ae-44e0-bfb8-601977c9936c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:20 crc kubenswrapper[4784]: I0123 06:57:20.955313 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-inventory" (OuterVolumeSpecName: "inventory") pod "ebc0675f-b9ae-44e0-bfb8-601977c9936c" (UID: "ebc0675f-b9ae-44e0-bfb8-601977c9936c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.001837 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.001882 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebc0675f-b9ae-44e0-bfb8-601977c9936c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.001897 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjr7h\" (UniqueName: \"kubernetes.io/projected/ebc0675f-b9ae-44e0-bfb8-601977c9936c-kube-api-access-vjr7h\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.273317 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" 
event={"ID":"ebc0675f-b9ae-44e0-bfb8-601977c9936c","Type":"ContainerDied","Data":"d28d2e6dd5bdabd78c9690fb7edd9eba35ccfabb27295ef4d3f480537a3d75a7"} Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.273359 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d28d2e6dd5bdabd78c9690fb7edd9eba35ccfabb27295ef4d3f480537a3d75a7" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.273423 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-svmtn" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.407562 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb"] Jan 23 06:57:21 crc kubenswrapper[4784]: E0123 06:57:21.408579 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc0675f-b9ae-44e0-bfb8-601977c9936c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.408608 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc0675f-b9ae-44e0-bfb8-601977c9936c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.408834 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc0675f-b9ae-44e0-bfb8-601977c9936c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.410164 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.415422 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.415600 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.415631 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.416589 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.422648 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb"] Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.534349 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.534721 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.534813 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pf5n\" (UniqueName: \"kubernetes.io/projected/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-kube-api-access-8pf5n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.636839 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.636944 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pf5n\" (UniqueName: \"kubernetes.io/projected/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-kube-api-access-8pf5n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.637065 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.641739 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.647366 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.658637 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pf5n\" (UniqueName: \"kubernetes.io/projected/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-kube-api-access-8pf5n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:21 crc kubenswrapper[4784]: I0123 06:57:21.741974 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:22 crc kubenswrapper[4784]: I0123 06:57:22.388716 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb"] Jan 23 06:57:23 crc kubenswrapper[4784]: I0123 06:57:23.305927 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" event={"ID":"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10","Type":"ContainerStarted","Data":"a53736fbfffcdb244400ae5eb537efd32c570f51b03cdb0d2643a4fc9ac18bd4"} Jan 23 06:57:24 crc kubenswrapper[4784]: I0123 06:57:24.317721 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" event={"ID":"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10","Type":"ContainerStarted","Data":"97338124a3443f0a6c42f644a0165f118c9029747ad70691265dc13b832ea19e"} Jan 23 06:57:24 crc kubenswrapper[4784]: I0123 06:57:24.351268 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" podStartSLOduration=2.765974693 podStartE2EDuration="3.351229436s" podCreationTimestamp="2026-01-23 06:57:21 +0000 UTC" firstStartedPulling="2026-01-23 06:57:22.397929307 +0000 UTC m=+2245.630437281" lastFinishedPulling="2026-01-23 06:57:22.98318405 +0000 UTC m=+2246.215692024" observedRunningTime="2026-01-23 06:57:24.342144893 +0000 UTC m=+2247.574652887" watchObservedRunningTime="2026-01-23 06:57:24.351229436 +0000 UTC m=+2247.583737420" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.659653 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cbgv8"] Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.665011 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.695845 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbgv8"] Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.806235 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zfk7\" (UniqueName: \"kubernetes.io/projected/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-kube-api-access-9zfk7\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.806462 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-utilities\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.806491 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-catalog-content\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.908990 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-utilities\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.909055 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-catalog-content\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.909134 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zfk7\" (UniqueName: \"kubernetes.io/projected/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-kube-api-access-9zfk7\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.909892 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-catalog-content\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.909914 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-utilities\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:28 crc kubenswrapper[4784]: I0123 06:57:28.937179 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zfk7\" (UniqueName: \"kubernetes.io/projected/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-kube-api-access-9zfk7\") pod \"community-operators-cbgv8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:29 crc kubenswrapper[4784]: I0123 06:57:29.009497 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:29 crc kubenswrapper[4784]: I0123 06:57:29.667365 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cbgv8"] Jan 23 06:57:30 crc kubenswrapper[4784]: I0123 06:57:30.483478 4784 generic.go:334] "Generic (PLEG): container finished" podID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerID="0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2" exitCode=0 Jan 23 06:57:30 crc kubenswrapper[4784]: I0123 06:57:30.483566 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbgv8" event={"ID":"d72f75f3-1efd-46d5-b166-8f0d36ec59d8","Type":"ContainerDied","Data":"0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2"} Jan 23 06:57:30 crc kubenswrapper[4784]: I0123 06:57:30.483996 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbgv8" event={"ID":"d72f75f3-1efd-46d5-b166-8f0d36ec59d8","Type":"ContainerStarted","Data":"454a8329b2e7f87258181cf7843cec484c2f5e9d594c2609e397c43f75c57a4d"} Jan 23 06:57:31 crc kubenswrapper[4784]: I0123 06:57:31.495608 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbgv8" event={"ID":"d72f75f3-1efd-46d5-b166-8f0d36ec59d8","Type":"ContainerStarted","Data":"4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40"} Jan 23 06:57:32 crc kubenswrapper[4784]: I0123 06:57:32.511773 4784 generic.go:334] "Generic (PLEG): container finished" podID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerID="4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40" exitCode=0 Jan 23 06:57:32 crc kubenswrapper[4784]: I0123 06:57:32.511925 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbgv8" 
event={"ID":"d72f75f3-1efd-46d5-b166-8f0d36ec59d8","Type":"ContainerDied","Data":"4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40"} Jan 23 06:57:33 crc kubenswrapper[4784]: I0123 06:57:33.526285 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbgv8" event={"ID":"d72f75f3-1efd-46d5-b166-8f0d36ec59d8","Type":"ContainerStarted","Data":"f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11"} Jan 23 06:57:33 crc kubenswrapper[4784]: I0123 06:57:33.550307 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cbgv8" podStartSLOduration=3.107219311 podStartE2EDuration="5.550278995s" podCreationTimestamp="2026-01-23 06:57:28 +0000 UTC" firstStartedPulling="2026-01-23 06:57:30.486330154 +0000 UTC m=+2253.718838139" lastFinishedPulling="2026-01-23 06:57:32.929389849 +0000 UTC m=+2256.161897823" observedRunningTime="2026-01-23 06:57:33.546249433 +0000 UTC m=+2256.778757417" watchObservedRunningTime="2026-01-23 06:57:33.550278995 +0000 UTC m=+2256.782786969" Jan 23 06:57:34 crc kubenswrapper[4784]: E0123 06:57:34.176945 4784 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a2d0f94_cbd7_4ff7_9fd0_53a9ac80ed10.slice/crio-conmon-97338124a3443f0a6c42f644a0165f118c9029747ad70691265dc13b832ea19e.scope\": RecentStats: unable to find data in memory cache]" Jan 23 06:57:34 crc kubenswrapper[4784]: I0123 06:57:34.544131 4784 generic.go:334] "Generic (PLEG): container finished" podID="8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10" containerID="97338124a3443f0a6c42f644a0165f118c9029747ad70691265dc13b832ea19e" exitCode=0 Jan 23 06:57:34 crc kubenswrapper[4784]: I0123 06:57:34.544214 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" 
event={"ID":"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10","Type":"ContainerDied","Data":"97338124a3443f0a6c42f644a0165f118c9029747ad70691265dc13b832ea19e"} Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.171728 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.314856 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pf5n\" (UniqueName: \"kubernetes.io/projected/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-kube-api-access-8pf5n\") pod \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.315139 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-ssh-key-openstack-edpm-ipam\") pod \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.315341 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-inventory\") pod \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\" (UID: \"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10\") " Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.321801 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-kube-api-access-8pf5n" (OuterVolumeSpecName: "kube-api-access-8pf5n") pod "8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10" (UID: "8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10"). InnerVolumeSpecName "kube-api-access-8pf5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.349289 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-inventory" (OuterVolumeSpecName: "inventory") pod "8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10" (UID: "8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.350473 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10" (UID: "8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.418403 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.418458 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pf5n\" (UniqueName: \"kubernetes.io/projected/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-kube-api-access-8pf5n\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.418474 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.568065 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" 
event={"ID":"8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10","Type":"ContainerDied","Data":"a53736fbfffcdb244400ae5eb537efd32c570f51b03cdb0d2643a4fc9ac18bd4"} Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.568557 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a53736fbfffcdb244400ae5eb537efd32c570f51b03cdb0d2643a4fc9ac18bd4" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.568254 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.686624 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw"] Jan 23 06:57:36 crc kubenswrapper[4784]: E0123 06:57:36.687241 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.687269 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.687611 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.688672 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.694976 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.694985 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.695226 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.695393 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.695519 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.695578 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.695615 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.695859 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.706729 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw"] Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.831332 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.831433 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.831461 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.831495 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.831537 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.831624 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.832014 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.832150 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.832194 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.832444 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.832549 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.832797 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.832833 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.832886 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rbvj\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-kube-api-access-2rbvj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.935236 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.935308 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.935332 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.935387 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.935415 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.935445 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.935481 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.936515 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.936570 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.936657 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.936713 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: 
\"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.936864 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.936896 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.936931 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rbvj\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-kube-api-access-2rbvj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.943297 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.944398 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.944484 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.944514 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.945569 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.945801 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.946481 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.946879 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.947369 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.948884 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" 
(UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.950477 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.950825 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.952101 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:36 crc kubenswrapper[4784]: I0123 06:57:36.961783 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rbvj\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-kube-api-access-2rbvj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-k75hw\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:37 crc kubenswrapper[4784]: I0123 06:57:37.013331 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:57:37 crc kubenswrapper[4784]: I0123 06:57:37.617860 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw"] Jan 23 06:57:37 crc kubenswrapper[4784]: W0123 06:57:37.622963 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb38255f5_498b_4d24_9754_1c994d7b260c.slice/crio-80dcbe688d97dce597a786b98782618730775ed7a469378edb590793f2582b99 WatchSource:0}: Error finding container 80dcbe688d97dce597a786b98782618730775ed7a469378edb590793f2582b99: Status 404 returned error can't find the container with id 80dcbe688d97dce597a786b98782618730775ed7a469378edb590793f2582b99 Jan 23 06:57:38 crc kubenswrapper[4784]: I0123 06:57:38.592717 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" event={"ID":"b38255f5-498b-4d24-9754-1c994d7b260c","Type":"ContainerStarted","Data":"c285cf027a90b62a68cd80bfb260d1bfbeed7f1cb3a732df54a62964e2e24b2d"} Jan 23 06:57:38 crc kubenswrapper[4784]: I0123 06:57:38.593471 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" event={"ID":"b38255f5-498b-4d24-9754-1c994d7b260c","Type":"ContainerStarted","Data":"80dcbe688d97dce597a786b98782618730775ed7a469378edb590793f2582b99"} Jan 23 06:57:38 crc kubenswrapper[4784]: I0123 06:57:38.621932 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" podStartSLOduration=2.037152366 podStartE2EDuration="2.621901753s" 
podCreationTimestamp="2026-01-23 06:57:36 +0000 UTC" firstStartedPulling="2026-01-23 06:57:37.627287348 +0000 UTC m=+2260.859795322" lastFinishedPulling="2026-01-23 06:57:38.212036735 +0000 UTC m=+2261.444544709" observedRunningTime="2026-01-23 06:57:38.612914173 +0000 UTC m=+2261.845422137" watchObservedRunningTime="2026-01-23 06:57:38.621901753 +0000 UTC m=+2261.854409727" Jan 23 06:57:39 crc kubenswrapper[4784]: I0123 06:57:39.009862 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:39 crc kubenswrapper[4784]: I0123 06:57:39.010558 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:39 crc kubenswrapper[4784]: I0123 06:57:39.074062 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:39 crc kubenswrapper[4784]: I0123 06:57:39.663161 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:39 crc kubenswrapper[4784]: I0123 06:57:39.723714 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbgv8"] Jan 23 06:57:41 crc kubenswrapper[4784]: I0123 06:57:41.621432 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cbgv8" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerName="registry-server" containerID="cri-o://f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11" gracePeriod=2 Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.156570 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.320575 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-catalog-content\") pod \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.320740 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-utilities\") pod \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.320868 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zfk7\" (UniqueName: \"kubernetes.io/projected/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-kube-api-access-9zfk7\") pod \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\" (UID: \"d72f75f3-1efd-46d5-b166-8f0d36ec59d8\") " Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.323465 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-utilities" (OuterVolumeSpecName: "utilities") pod "d72f75f3-1efd-46d5-b166-8f0d36ec59d8" (UID: "d72f75f3-1efd-46d5-b166-8f0d36ec59d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.332063 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-kube-api-access-9zfk7" (OuterVolumeSpecName: "kube-api-access-9zfk7") pod "d72f75f3-1efd-46d5-b166-8f0d36ec59d8" (UID: "d72f75f3-1efd-46d5-b166-8f0d36ec59d8"). InnerVolumeSpecName "kube-api-access-9zfk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.389862 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d72f75f3-1efd-46d5-b166-8f0d36ec59d8" (UID: "d72f75f3-1efd-46d5-b166-8f0d36ec59d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.423373 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zfk7\" (UniqueName: \"kubernetes.io/projected/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-kube-api-access-9zfk7\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.423424 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.423436 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d72f75f3-1efd-46d5-b166-8f0d36ec59d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.637692 4784 generic.go:334] "Generic (PLEG): container finished" podID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerID="f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11" exitCode=0 Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.637772 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cbgv8" event={"ID":"d72f75f3-1efd-46d5-b166-8f0d36ec59d8","Type":"ContainerDied","Data":"f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11"} Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.638684 4784 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-cbgv8" event={"ID":"d72f75f3-1efd-46d5-b166-8f0d36ec59d8","Type":"ContainerDied","Data":"454a8329b2e7f87258181cf7843cec484c2f5e9d594c2609e397c43f75c57a4d"} Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.637865 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cbgv8" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.638731 4784 scope.go:117] "RemoveContainer" containerID="f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.689905 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cbgv8"] Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.690070 4784 scope.go:117] "RemoveContainer" containerID="4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.704213 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cbgv8"] Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.724360 4784 scope.go:117] "RemoveContainer" containerID="0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.763713 4784 scope.go:117] "RemoveContainer" containerID="f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11" Jan 23 06:57:42 crc kubenswrapper[4784]: E0123 06:57:42.764531 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11\": container with ID starting with f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11 not found: ID does not exist" containerID="f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 
06:57:42.764594 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11"} err="failed to get container status \"f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11\": rpc error: code = NotFound desc = could not find container \"f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11\": container with ID starting with f56a63b3bd610d51fd10616805e71046c814654637e01006a54d02e4c92e9c11 not found: ID does not exist" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.764699 4784 scope.go:117] "RemoveContainer" containerID="4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40" Jan 23 06:57:42 crc kubenswrapper[4784]: E0123 06:57:42.765355 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40\": container with ID starting with 4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40 not found: ID does not exist" containerID="4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.765407 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40"} err="failed to get container status \"4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40\": rpc error: code = NotFound desc = could not find container \"4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40\": container with ID starting with 4c6da8b548fe9545fd46acd1fda88df5393a417662c312d152f901d6bd0a4a40 not found: ID does not exist" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.765444 4784 scope.go:117] "RemoveContainer" containerID="0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2" Jan 23 06:57:42 crc 
kubenswrapper[4784]: E0123 06:57:42.765663 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2\": container with ID starting with 0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2 not found: ID does not exist" containerID="0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2" Jan 23 06:57:42 crc kubenswrapper[4784]: I0123 06:57:42.765682 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2"} err="failed to get container status \"0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2\": rpc error: code = NotFound desc = could not find container \"0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2\": container with ID starting with 0b4c353a8e7a80ba692b1042597610e56aff8aaaf83dfe19040d0076c0aa3cb2 not found: ID does not exist" Jan 23 06:57:43 crc kubenswrapper[4784]: I0123 06:57:43.268059 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" path="/var/lib/kubelet/pods/d72f75f3-1efd-46d5-b166-8f0d36ec59d8/volumes" Jan 23 06:57:53 crc kubenswrapper[4784]: I0123 06:57:53.603660 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:57:53 crc kubenswrapper[4784]: I0123 06:57:53.604608 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 23 06:58:19 crc kubenswrapper[4784]: I0123 06:58:19.045351 4784 generic.go:334] "Generic (PLEG): container finished" podID="b38255f5-498b-4d24-9754-1c994d7b260c" containerID="c285cf027a90b62a68cd80bfb260d1bfbeed7f1cb3a732df54a62964e2e24b2d" exitCode=0 Jan 23 06:58:19 crc kubenswrapper[4784]: I0123 06:58:19.045460 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" event={"ID":"b38255f5-498b-4d24-9754-1c994d7b260c","Type":"ContainerDied","Data":"c285cf027a90b62a68cd80bfb260d1bfbeed7f1cb3a732df54a62964e2e24b2d"} Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.576246 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722197 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-nova-combined-ca-bundle\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722291 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-neutron-metadata-combined-ca-bundle\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722350 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: 
\"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722377 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722440 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-telemetry-combined-ca-bundle\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722479 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-repo-setup-combined-ca-bundle\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722517 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ovn-combined-ca-bundle\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722606 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-libvirt-combined-ca-bundle\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 
06:58:20.722671 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ssh-key-openstack-edpm-ipam\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722793 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-inventory\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722816 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-bootstrap-combined-ca-bundle\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722860 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722899 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rbvj\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-kube-api-access-2rbvj\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.722979 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b38255f5-498b-4d24-9754-1c994d7b260c\" (UID: \"b38255f5-498b-4d24-9754-1c994d7b260c\") " Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.731245 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.732528 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.734605 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.735668 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.735731 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.736687 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.737153 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.737607 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.738308 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.748338 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-kube-api-access-2rbvj" (OuterVolumeSpecName: "kube-api-access-2rbvj") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "kube-api-access-2rbvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.748697 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.748785 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.775702 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.775966 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-inventory" (OuterVolumeSpecName: "inventory") pod "b38255f5-498b-4d24-9754-1c994d7b260c" (UID: "b38255f5-498b-4d24-9754-1c994d7b260c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826276 4784 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826318 4784 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826329 4784 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826344 4784 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826356 4784 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826369 4784 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826377 4784 
reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826389 4784 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826400 4784 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826408 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826417 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826425 4784 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38255f5-498b-4d24-9754-1c994d7b260c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc kubenswrapper[4784]: I0123 06:58:20.826433 4784 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:20 crc 
kubenswrapper[4784]: I0123 06:58:20.826443 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rbvj\" (UniqueName: \"kubernetes.io/projected/b38255f5-498b-4d24-9754-1c994d7b260c-kube-api-access-2rbvj\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.066890 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" event={"ID":"b38255f5-498b-4d24-9754-1c994d7b260c","Type":"ContainerDied","Data":"80dcbe688d97dce597a786b98782618730775ed7a469378edb590793f2582b99"} Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.066972 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80dcbe688d97dce597a786b98782618730775ed7a469378edb590793f2582b99" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.067047 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-k75hw" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.347444 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh"] Jan 23 06:58:21 crc kubenswrapper[4784]: E0123 06:58:21.348113 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerName="registry-server" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.348136 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerName="registry-server" Jan 23 06:58:21 crc kubenswrapper[4784]: E0123 06:58:21.348154 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerName="extract-content" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.348162 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" 
containerName="extract-content" Jan 23 06:58:21 crc kubenswrapper[4784]: E0123 06:58:21.348188 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38255f5-498b-4d24-9754-1c994d7b260c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.348197 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38255f5-498b-4d24-9754-1c994d7b260c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 06:58:21 crc kubenswrapper[4784]: E0123 06:58:21.348207 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerName="extract-utilities" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.348214 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerName="extract-utilities" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.348490 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38255f5-498b-4d24-9754-1c994d7b260c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.348521 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72f75f3-1efd-46d5-b166-8f0d36ec59d8" containerName="registry-server" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.349511 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.353286 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.353371 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.353417 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.353515 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.354174 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.362585 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh"] Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.441113 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.441264 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8h2\" (UniqueName: \"kubernetes.io/projected/cb18257b-963a-49bb-a493-0da8a460532f-kube-api-access-fz8h2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: 
\"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.441904 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb18257b-963a-49bb-a493-0da8a460532f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.441955 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.442096 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.544532 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.544598 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fz8h2\" (UniqueName: \"kubernetes.io/projected/cb18257b-963a-49bb-a493-0da8a460532f-kube-api-access-fz8h2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.544774 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.544807 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb18257b-963a-49bb-a493-0da8a460532f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.544861 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.547793 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb18257b-963a-49bb-a493-0da8a460532f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: 
\"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.553929 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.557260 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.561359 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.583918 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz8h2\" (UniqueName: \"kubernetes.io/projected/cb18257b-963a-49bb-a493-0da8a460532f-kube-api-access-fz8h2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vg9vh\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:21 crc kubenswrapper[4784]: I0123 06:58:21.674056 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:58:22 crc kubenswrapper[4784]: I0123 06:58:22.224981 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh"] Jan 23 06:58:22 crc kubenswrapper[4784]: W0123 06:58:22.225950 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb18257b_963a_49bb_a493_0da8a460532f.slice/crio-e2acb34859ba3fd5e1d032c82bb18d5872d4cd2577cc261e6486031b017b2f70 WatchSource:0}: Error finding container e2acb34859ba3fd5e1d032c82bb18d5872d4cd2577cc261e6486031b017b2f70: Status 404 returned error can't find the container with id e2acb34859ba3fd5e1d032c82bb18d5872d4cd2577cc261e6486031b017b2f70 Jan 23 06:58:23 crc kubenswrapper[4784]: I0123 06:58:23.093710 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" event={"ID":"cb18257b-963a-49bb-a493-0da8a460532f","Type":"ContainerStarted","Data":"d9a1ff3b98d63371d33d4229e5a58a8273f69c7ae88785345151ba5ac09b488e"} Jan 23 06:58:23 crc kubenswrapper[4784]: I0123 06:58:23.094889 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" event={"ID":"cb18257b-963a-49bb-a493-0da8a460532f","Type":"ContainerStarted","Data":"e2acb34859ba3fd5e1d032c82bb18d5872d4cd2577cc261e6486031b017b2f70"} Jan 23 06:58:23 crc kubenswrapper[4784]: I0123 06:58:23.122721 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" podStartSLOduration=1.588894853 podStartE2EDuration="2.122694655s" podCreationTimestamp="2026-01-23 06:58:21 +0000 UTC" firstStartedPulling="2026-01-23 06:58:22.229898647 +0000 UTC m=+2305.462406621" lastFinishedPulling="2026-01-23 06:58:22.763698459 +0000 UTC m=+2305.996206423" observedRunningTime="2026-01-23 
06:58:23.116243031 +0000 UTC m=+2306.348751005" watchObservedRunningTime="2026-01-23 06:58:23.122694655 +0000 UTC m=+2306.355202629" Jan 23 06:58:23 crc kubenswrapper[4784]: I0123 06:58:23.602963 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:58:23 crc kubenswrapper[4784]: I0123 06:58:23.603038 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:58:53 crc kubenswrapper[4784]: I0123 06:58:53.603891 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:58:53 crc kubenswrapper[4784]: I0123 06:58:53.604990 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:58:53 crc kubenswrapper[4784]: I0123 06:58:53.605080 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 06:58:53 crc kubenswrapper[4784]: I0123 06:58:53.606346 4784 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:58:53 crc kubenswrapper[4784]: I0123 06:58:53.606426 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" gracePeriod=600 Jan 23 06:58:53 crc kubenswrapper[4784]: E0123 06:58:53.740263 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:58:54 crc kubenswrapper[4784]: I0123 06:58:54.274711 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" exitCode=0 Jan 23 06:58:54 crc kubenswrapper[4784]: I0123 06:58:54.274772 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3"} Jan 23 06:58:54 crc kubenswrapper[4784]: I0123 06:58:54.274880 4784 scope.go:117] "RemoveContainer" containerID="9bd4600bcba967d7f7054c915be757b108173dd1d97a02b48ff9bdbc943173d5" Jan 23 06:58:54 crc 
kubenswrapper[4784]: I0123 06:58:54.276060 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 06:58:54 crc kubenswrapper[4784]: E0123 06:58:54.276488 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:59:05 crc kubenswrapper[4784]: I0123 06:59:05.254373 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 06:59:05 crc kubenswrapper[4784]: E0123 06:59:05.255665 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:59:16 crc kubenswrapper[4784]: I0123 06:59:16.254626 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 06:59:16 crc kubenswrapper[4784]: E0123 06:59:16.255585 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 
23 06:59:29 crc kubenswrapper[4784]: I0123 06:59:29.253628 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 06:59:29 crc kubenswrapper[4784]: E0123 06:59:29.254822 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:59:36 crc kubenswrapper[4784]: I0123 06:59:36.757563 4784 generic.go:334] "Generic (PLEG): container finished" podID="cb18257b-963a-49bb-a493-0da8a460532f" containerID="d9a1ff3b98d63371d33d4229e5a58a8273f69c7ae88785345151ba5ac09b488e" exitCode=0 Jan 23 06:59:36 crc kubenswrapper[4784]: I0123 06:59:36.757637 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" event={"ID":"cb18257b-963a-49bb-a493-0da8a460532f","Type":"ContainerDied","Data":"d9a1ff3b98d63371d33d4229e5a58a8273f69c7ae88785345151ba5ac09b488e"} Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.249701 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.376329 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz8h2\" (UniqueName: \"kubernetes.io/projected/cb18257b-963a-49bb-a493-0da8a460532f-kube-api-access-fz8h2\") pod \"cb18257b-963a-49bb-a493-0da8a460532f\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.376444 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-inventory\") pod \"cb18257b-963a-49bb-a493-0da8a460532f\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.376561 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ssh-key-openstack-edpm-ipam\") pod \"cb18257b-963a-49bb-a493-0da8a460532f\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.376720 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ovn-combined-ca-bundle\") pod \"cb18257b-963a-49bb-a493-0da8a460532f\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.376849 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb18257b-963a-49bb-a493-0da8a460532f-ovncontroller-config-0\") pod \"cb18257b-963a-49bb-a493-0da8a460532f\" (UID: \"cb18257b-963a-49bb-a493-0da8a460532f\") " Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.384807 4784 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cb18257b-963a-49bb-a493-0da8a460532f" (UID: "cb18257b-963a-49bb-a493-0da8a460532f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.385623 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb18257b-963a-49bb-a493-0da8a460532f-kube-api-access-fz8h2" (OuterVolumeSpecName: "kube-api-access-fz8h2") pod "cb18257b-963a-49bb-a493-0da8a460532f" (UID: "cb18257b-963a-49bb-a493-0da8a460532f"). InnerVolumeSpecName "kube-api-access-fz8h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.410548 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb18257b-963a-49bb-a493-0da8a460532f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "cb18257b-963a-49bb-a493-0da8a460532f" (UID: "cb18257b-963a-49bb-a493-0da8a460532f"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.414211 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-inventory" (OuterVolumeSpecName: "inventory") pod "cb18257b-963a-49bb-a493-0da8a460532f" (UID: "cb18257b-963a-49bb-a493-0da8a460532f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.422308 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cb18257b-963a-49bb-a493-0da8a460532f" (UID: "cb18257b-963a-49bb-a493-0da8a460532f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.479932 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz8h2\" (UniqueName: \"kubernetes.io/projected/cb18257b-963a-49bb-a493-0da8a460532f-kube-api-access-fz8h2\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.479977 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.479992 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.480015 4784 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb18257b-963a-49bb-a493-0da8a460532f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.480027 4784 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb18257b-963a-49bb-a493-0da8a460532f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.785920 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" event={"ID":"cb18257b-963a-49bb-a493-0da8a460532f","Type":"ContainerDied","Data":"e2acb34859ba3fd5e1d032c82bb18d5872d4cd2577cc261e6486031b017b2f70"} Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.785990 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2acb34859ba3fd5e1d032c82bb18d5872d4cd2577cc261e6486031b017b2f70" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.785993 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vg9vh" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.901306 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr"] Jan 23 06:59:38 crc kubenswrapper[4784]: E0123 06:59:38.901837 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb18257b-963a-49bb-a493-0da8a460532f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.901864 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb18257b-963a-49bb-a493-0da8a460532f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.902115 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb18257b-963a-49bb-a493-0da8a460532f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.902951 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.907228 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.907543 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.907704 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.908093 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.908193 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.912590 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.924930 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr"] Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.992679 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.993160 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.994851 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.995280 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.995363 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:38 crc kubenswrapper[4784]: I0123 06:59:38.995474 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpssh\" (UniqueName: \"kubernetes.io/projected/f9d1c448-c73e-4e10-8265-5c19080dc923-kube-api-access-dpssh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.098092 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.098613 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.098661 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpssh\" (UniqueName: \"kubernetes.io/projected/f9d1c448-c73e-4e10-8265-5c19080dc923-kube-api-access-dpssh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.099272 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.099595 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.099731 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.105528 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.107775 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.107882 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.108548 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.108866 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.120099 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpssh\" (UniqueName: \"kubernetes.io/projected/f9d1c448-c73e-4e10-8265-5c19080dc923-kube-api-access-dpssh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.230483 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.828351 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:59:39 crc kubenswrapper[4784]: I0123 06:59:39.834166 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr"] Jan 23 06:59:40 crc kubenswrapper[4784]: I0123 06:59:40.809878 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" event={"ID":"f9d1c448-c73e-4e10-8265-5c19080dc923","Type":"ContainerStarted","Data":"8a59f9ae87c840ec1a4b19f0af45e098cceade67268759f3ed50500597d95d3f"} Jan 23 06:59:40 crc kubenswrapper[4784]: I0123 06:59:40.810565 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" event={"ID":"f9d1c448-c73e-4e10-8265-5c19080dc923","Type":"ContainerStarted","Data":"bde1feae3b1a5deae68215b624368f27dfc8be9b3e078b9def8f58cba972317c"} Jan 23 06:59:41 crc kubenswrapper[4784]: I0123 06:59:41.254924 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 06:59:41 crc kubenswrapper[4784]: E0123 06:59:41.255630 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" 
podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 06:59:41 crc kubenswrapper[4784]: I0123 06:59:41.842160 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" podStartSLOduration=3.185023829 podStartE2EDuration="3.842125706s" podCreationTimestamp="2026-01-23 06:59:38 +0000 UTC" firstStartedPulling="2026-01-23 06:59:39.828029874 +0000 UTC m=+2383.060537848" lastFinishedPulling="2026-01-23 06:59:40.485131751 +0000 UTC m=+2383.717639725" observedRunningTime="2026-01-23 06:59:41.837151425 +0000 UTC m=+2385.069659399" watchObservedRunningTime="2026-01-23 06:59:41.842125706 +0000 UTC m=+2385.074633680" Jan 23 06:59:55 crc kubenswrapper[4784]: I0123 06:59:55.254551 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 06:59:55 crc kubenswrapper[4784]: E0123 06:59:55.255966 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.154230 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj"] Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.157220 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.160620 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.160993 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.170745 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj"] Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.258948 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-config-volume\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.259578 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-secret-volume\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.259627 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhjsb\" (UniqueName: \"kubernetes.io/projected/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-kube-api-access-qhjsb\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.363869 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-secret-volume\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.364965 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhjsb\" (UniqueName: \"kubernetes.io/projected/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-kube-api-access-qhjsb\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.365584 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-config-volume\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.366920 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-config-volume\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.372688 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-secret-volume\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.385678 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhjsb\" (UniqueName: \"kubernetes.io/projected/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-kube-api-access-qhjsb\") pod \"collect-profiles-29485860-5wsnj\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:00 crc kubenswrapper[4784]: I0123 07:00:00.525445 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:01 crc kubenswrapper[4784]: W0123 07:00:01.046246 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36556d0b_98ff_4f56_944e_a8d9c5baa9e0.slice/crio-ed60c5bbf4299f25a91ce1bcde5c93a24097c646febfa6355e619897d041e250 WatchSource:0}: Error finding container ed60c5bbf4299f25a91ce1bcde5c93a24097c646febfa6355e619897d041e250: Status 404 returned error can't find the container with id ed60c5bbf4299f25a91ce1bcde5c93a24097c646febfa6355e619897d041e250 Jan 23 07:00:01 crc kubenswrapper[4784]: I0123 07:00:01.046405 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj"] Jan 23 07:00:01 crc kubenswrapper[4784]: I0123 07:00:01.059368 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" event={"ID":"36556d0b-98ff-4f56-944e-a8d9c5baa9e0","Type":"ContainerStarted","Data":"ed60c5bbf4299f25a91ce1bcde5c93a24097c646febfa6355e619897d041e250"} Jan 23 07:00:02 crc 
kubenswrapper[4784]: I0123 07:00:02.075564 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" event={"ID":"36556d0b-98ff-4f56-944e-a8d9c5baa9e0","Type":"ContainerStarted","Data":"1c17c3c3ace1a389110c7e4f8cd8aaaf877ed6ca54466a04c37fa06226826dad"} Jan 23 07:00:03 crc kubenswrapper[4784]: I0123 07:00:03.089539 4784 generic.go:334] "Generic (PLEG): container finished" podID="36556d0b-98ff-4f56-944e-a8d9c5baa9e0" containerID="1c17c3c3ace1a389110c7e4f8cd8aaaf877ed6ca54466a04c37fa06226826dad" exitCode=0 Jan 23 07:00:03 crc kubenswrapper[4784]: I0123 07:00:03.089663 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" event={"ID":"36556d0b-98ff-4f56-944e-a8d9c5baa9e0","Type":"ContainerDied","Data":"1c17c3c3ace1a389110c7e4f8cd8aaaf877ed6ca54466a04c37fa06226826dad"} Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.500088 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.676345 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhjsb\" (UniqueName: \"kubernetes.io/projected/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-kube-api-access-qhjsb\") pod \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.676653 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-secret-volume\") pod \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.676784 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-config-volume\") pod \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\" (UID: \"36556d0b-98ff-4f56-944e-a8d9c5baa9e0\") " Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.678110 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "36556d0b-98ff-4f56-944e-a8d9c5baa9e0" (UID: "36556d0b-98ff-4f56-944e-a8d9c5baa9e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.683823 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-kube-api-access-qhjsb" (OuterVolumeSpecName: "kube-api-access-qhjsb") pod "36556d0b-98ff-4f56-944e-a8d9c5baa9e0" (UID: "36556d0b-98ff-4f56-944e-a8d9c5baa9e0"). 
InnerVolumeSpecName "kube-api-access-qhjsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.683844 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "36556d0b-98ff-4f56-944e-a8d9c5baa9e0" (UID: "36556d0b-98ff-4f56-944e-a8d9c5baa9e0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.780817 4784 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.781236 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:04 crc kubenswrapper[4784]: I0123 07:00:04.781256 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhjsb\" (UniqueName: \"kubernetes.io/projected/36556d0b-98ff-4f56-944e-a8d9c5baa9e0-kube-api-access-qhjsb\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:05 crc kubenswrapper[4784]: I0123 07:00:05.115658 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" event={"ID":"36556d0b-98ff-4f56-944e-a8d9c5baa9e0","Type":"ContainerDied","Data":"ed60c5bbf4299f25a91ce1bcde5c93a24097c646febfa6355e619897d041e250"} Jan 23 07:00:05 crc kubenswrapper[4784]: I0123 07:00:05.115715 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed60c5bbf4299f25a91ce1bcde5c93a24097c646febfa6355e619897d041e250" Jan 23 07:00:05 crc kubenswrapper[4784]: I0123 07:00:05.115855 4784 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj" Jan 23 07:00:05 crc kubenswrapper[4784]: I0123 07:00:05.591811 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6"] Jan 23 07:00:05 crc kubenswrapper[4784]: I0123 07:00:05.604530 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485815-bhpt6"] Jan 23 07:00:07 crc kubenswrapper[4784]: I0123 07:00:07.254233 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:00:07 crc kubenswrapper[4784]: E0123 07:00:07.255148 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:00:07 crc kubenswrapper[4784]: I0123 07:00:07.271547 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd230e8a-2ec3-40e3-b964-66279c61bdfb" path="/var/lib/kubelet/pods/cd230e8a-2ec3-40e3-b964-66279c61bdfb/volumes" Jan 23 07:00:18 crc kubenswrapper[4784]: I0123 07:00:18.254361 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:00:18 crc kubenswrapper[4784]: E0123 07:00:18.255452 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:00:29 crc kubenswrapper[4784]: I0123 07:00:29.256433 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:00:29 crc kubenswrapper[4784]: E0123 07:00:29.257465 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:00:36 crc kubenswrapper[4784]: I0123 07:00:36.493348 4784 generic.go:334] "Generic (PLEG): container finished" podID="f9d1c448-c73e-4e10-8265-5c19080dc923" containerID="8a59f9ae87c840ec1a4b19f0af45e098cceade67268759f3ed50500597d95d3f" exitCode=0 Jan 23 07:00:36 crc kubenswrapper[4784]: I0123 07:00:36.493467 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" event={"ID":"f9d1c448-c73e-4e10-8265-5c19080dc923","Type":"ContainerDied","Data":"8a59f9ae87c840ec1a4b19f0af45e098cceade67268759f3ed50500597d95d3f"} Jan 23 07:00:37 crc kubenswrapper[4784]: I0123 07:00:37.564998 4784 scope.go:117] "RemoveContainer" containerID="b757aaa3edcd7f6f8f627810d78b3b4955df8395f248cd617074730f9fb0c596" Jan 23 07:00:37 crc kubenswrapper[4784]: I0123 07:00:37.998902 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.160738 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-ssh-key-openstack-edpm-ipam\") pod \"f9d1c448-c73e-4e10-8265-5c19080dc923\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.160854 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-metadata-combined-ca-bundle\") pod \"f9d1c448-c73e-4e10-8265-5c19080dc923\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.160987 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-nova-metadata-neutron-config-0\") pod \"f9d1c448-c73e-4e10-8265-5c19080dc923\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.161160 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-inventory\") pod \"f9d1c448-c73e-4e10-8265-5c19080dc923\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.161231 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f9d1c448-c73e-4e10-8265-5c19080dc923\" (UID: 
\"f9d1c448-c73e-4e10-8265-5c19080dc923\") " Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.161482 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpssh\" (UniqueName: \"kubernetes.io/projected/f9d1c448-c73e-4e10-8265-5c19080dc923-kube-api-access-dpssh\") pod \"f9d1c448-c73e-4e10-8265-5c19080dc923\" (UID: \"f9d1c448-c73e-4e10-8265-5c19080dc923\") " Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.170264 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f9d1c448-c73e-4e10-8265-5c19080dc923" (UID: "f9d1c448-c73e-4e10-8265-5c19080dc923"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.170328 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d1c448-c73e-4e10-8265-5c19080dc923-kube-api-access-dpssh" (OuterVolumeSpecName: "kube-api-access-dpssh") pod "f9d1c448-c73e-4e10-8265-5c19080dc923" (UID: "f9d1c448-c73e-4e10-8265-5c19080dc923"). InnerVolumeSpecName "kube-api-access-dpssh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.197801 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f9d1c448-c73e-4e10-8265-5c19080dc923" (UID: "f9d1c448-c73e-4e10-8265-5c19080dc923"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.198884 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f9d1c448-c73e-4e10-8265-5c19080dc923" (UID: "f9d1c448-c73e-4e10-8265-5c19080dc923"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.201780 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f9d1c448-c73e-4e10-8265-5c19080dc923" (UID: "f9d1c448-c73e-4e10-8265-5c19080dc923"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.205100 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-inventory" (OuterVolumeSpecName: "inventory") pod "f9d1c448-c73e-4e10-8265-5c19080dc923" (UID: "f9d1c448-c73e-4e10-8265-5c19080dc923"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.264735 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpssh\" (UniqueName: \"kubernetes.io/projected/f9d1c448-c73e-4e10-8265-5c19080dc923-kube-api-access-dpssh\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.265144 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.265247 4784 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.265329 4784 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.265410 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.265495 4784 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f9d1c448-c73e-4e10-8265-5c19080dc923-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.517516 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" event={"ID":"f9d1c448-c73e-4e10-8265-5c19080dc923","Type":"ContainerDied","Data":"bde1feae3b1a5deae68215b624368f27dfc8be9b3e078b9def8f58cba972317c"} Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.517589 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bde1feae3b1a5deae68215b624368f27dfc8be9b3e078b9def8f58cba972317c" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.518179 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.737543 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw"] Jan 23 07:00:38 crc kubenswrapper[4784]: E0123 07:00:38.738108 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36556d0b-98ff-4f56-944e-a8d9c5baa9e0" containerName="collect-profiles" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.738136 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="36556d0b-98ff-4f56-944e-a8d9c5baa9e0" containerName="collect-profiles" Jan 23 07:00:38 crc kubenswrapper[4784]: E0123 07:00:38.738169 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d1c448-c73e-4e10-8265-5c19080dc923" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.738179 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d1c448-c73e-4e10-8265-5c19080dc923" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.738466 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d1c448-c73e-4e10-8265-5c19080dc923" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 07:00:38 crc kubenswrapper[4784]: 
I0123 07:00:38.738497 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="36556d0b-98ff-4f56-944e-a8d9c5baa9e0" containerName="collect-profiles" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.739335 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.742710 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.743260 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.743670 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.744328 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.750296 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.756467 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw"] Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.786623 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.787125 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d89g\" (UniqueName: \"kubernetes.io/projected/fe48ab60-daab-4f78-8276-76ddc1745644-kube-api-access-2d89g\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.787355 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.787504 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.787653 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.897333 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.897445 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d89g\" (UniqueName: \"kubernetes.io/projected/fe48ab60-daab-4f78-8276-76ddc1745644-kube-api-access-2d89g\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.897541 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.897594 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.897635 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.923217 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.930088 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.932450 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d89g\" (UniqueName: \"kubernetes.io/projected/fe48ab60-daab-4f78-8276-76ddc1745644-kube-api-access-2d89g\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.932582 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:38 crc kubenswrapper[4784]: I0123 07:00:38.933250 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:39 crc kubenswrapper[4784]: I0123 07:00:39.063886 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:00:39 crc kubenswrapper[4784]: I0123 07:00:39.667352 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw"] Jan 23 07:00:40 crc kubenswrapper[4784]: I0123 07:00:40.612766 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" event={"ID":"fe48ab60-daab-4f78-8276-76ddc1745644","Type":"ContainerStarted","Data":"ceddc7bdc7ee08cb5c7fba28123746be2dd6b468d2b34f9a7f58ccf2c4453d0e"} Jan 23 07:00:40 crc kubenswrapper[4784]: I0123 07:00:40.613163 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" event={"ID":"fe48ab60-daab-4f78-8276-76ddc1745644","Type":"ContainerStarted","Data":"98f6d9bb12d483de6540cba7f87575efe4a783da9e05700befa90e62964fe373"} Jan 23 07:00:41 crc kubenswrapper[4784]: I0123 07:00:41.645550 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" podStartSLOduration=3.075101495 podStartE2EDuration="3.645513895s" podCreationTimestamp="2026-01-23 07:00:38 +0000 UTC" firstStartedPulling="2026-01-23 07:00:39.679279954 +0000 UTC m=+2442.911787928" lastFinishedPulling="2026-01-23 07:00:40.249692354 +0000 UTC m=+2443.482200328" observedRunningTime="2026-01-23 07:00:41.642681545 +0000 UTC m=+2444.875189529" watchObservedRunningTime="2026-01-23 07:00:41.645513895 +0000 UTC m=+2444.878021869" Jan 23 07:00:44 crc 
kubenswrapper[4784]: I0123 07:00:44.254257 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:00:44 crc kubenswrapper[4784]: E0123 07:00:44.255197 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:00:55 crc kubenswrapper[4784]: I0123 07:00:55.254689 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:00:55 crc kubenswrapper[4784]: E0123 07:00:55.255919 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.152024 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29485861-48jdw"] Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.196832 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485861-48jdw"] Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.196957 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.401169 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-fernet-keys\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.403247 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgnx6\" (UniqueName: \"kubernetes.io/projected/b2e51175-c98c-49a9-ac8b-511b91913b99-kube-api-access-lgnx6\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.403492 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-config-data\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.404066 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-combined-ca-bundle\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.506125 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgnx6\" (UniqueName: \"kubernetes.io/projected/b2e51175-c98c-49a9-ac8b-511b91913b99-kube-api-access-lgnx6\") pod 
\"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.506188 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-config-data\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.506240 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-combined-ca-bundle\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.506289 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-fernet-keys\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.514173 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-config-data\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.514769 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-fernet-keys\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " 
pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.514695 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-combined-ca-bundle\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.546657 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgnx6\" (UniqueName: \"kubernetes.io/projected/b2e51175-c98c-49a9-ac8b-511b91913b99-kube-api-access-lgnx6\") pod \"keystone-cron-29485861-48jdw\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:00 crc kubenswrapper[4784]: I0123 07:01:00.831684 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:01 crc kubenswrapper[4784]: I0123 07:01:01.416785 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485861-48jdw"] Jan 23 07:01:01 crc kubenswrapper[4784]: I0123 07:01:01.869636 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485861-48jdw" event={"ID":"b2e51175-c98c-49a9-ac8b-511b91913b99","Type":"ContainerStarted","Data":"71469cc01510913b3a6afdb45b3b6735d01e97be7a7c8f35207910cc6afb5934"} Jan 23 07:01:01 crc kubenswrapper[4784]: I0123 07:01:01.870597 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485861-48jdw" event={"ID":"b2e51175-c98c-49a9-ac8b-511b91913b99","Type":"ContainerStarted","Data":"f3c0c677e9ba8bfdf6d7797025ff8bfacf162e261aa2d26e4ea3102b7a7d2fd8"} Jan 23 07:01:01 crc kubenswrapper[4784]: I0123 07:01:01.903698 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29485861-48jdw" 
podStartSLOduration=1.903621823 podStartE2EDuration="1.903621823s" podCreationTimestamp="2026-01-23 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:01:01.892586181 +0000 UTC m=+2465.125094155" watchObservedRunningTime="2026-01-23 07:01:01.903621823 +0000 UTC m=+2465.136129797" Jan 23 07:01:04 crc kubenswrapper[4784]: I0123 07:01:04.907489 4784 generic.go:334] "Generic (PLEG): container finished" podID="b2e51175-c98c-49a9-ac8b-511b91913b99" containerID="71469cc01510913b3a6afdb45b3b6735d01e97be7a7c8f35207910cc6afb5934" exitCode=0 Jan 23 07:01:04 crc kubenswrapper[4784]: I0123 07:01:04.907605 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485861-48jdw" event={"ID":"b2e51175-c98c-49a9-ac8b-511b91913b99","Type":"ContainerDied","Data":"71469cc01510913b3a6afdb45b3b6735d01e97be7a7c8f35207910cc6afb5934"} Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.295496 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.363653 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-config-data\") pod \"b2e51175-c98c-49a9-ac8b-511b91913b99\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.363953 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-fernet-keys\") pod \"b2e51175-c98c-49a9-ac8b-511b91913b99\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.364131 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-combined-ca-bundle\") pod \"b2e51175-c98c-49a9-ac8b-511b91913b99\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.364371 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgnx6\" (UniqueName: \"kubernetes.io/projected/b2e51175-c98c-49a9-ac8b-511b91913b99-kube-api-access-lgnx6\") pod \"b2e51175-c98c-49a9-ac8b-511b91913b99\" (UID: \"b2e51175-c98c-49a9-ac8b-511b91913b99\") " Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.375835 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e51175-c98c-49a9-ac8b-511b91913b99-kube-api-access-lgnx6" (OuterVolumeSpecName: "kube-api-access-lgnx6") pod "b2e51175-c98c-49a9-ac8b-511b91913b99" (UID: "b2e51175-c98c-49a9-ac8b-511b91913b99"). InnerVolumeSpecName "kube-api-access-lgnx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.381448 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b2e51175-c98c-49a9-ac8b-511b91913b99" (UID: "b2e51175-c98c-49a9-ac8b-511b91913b99"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.408467 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2e51175-c98c-49a9-ac8b-511b91913b99" (UID: "b2e51175-c98c-49a9-ac8b-511b91913b99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.437797 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-config-data" (OuterVolumeSpecName: "config-data") pod "b2e51175-c98c-49a9-ac8b-511b91913b99" (UID: "b2e51175-c98c-49a9-ac8b-511b91913b99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.471340 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.471410 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgnx6\" (UniqueName: \"kubernetes.io/projected/b2e51175-c98c-49a9-ac8b-511b91913b99-kube-api-access-lgnx6\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.471430 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.471447 4784 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2e51175-c98c-49a9-ac8b-511b91913b99-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.932101 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485861-48jdw" event={"ID":"b2e51175-c98c-49a9-ac8b-511b91913b99","Type":"ContainerDied","Data":"f3c0c677e9ba8bfdf6d7797025ff8bfacf162e261aa2d26e4ea3102b7a7d2fd8"} Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.932219 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3c0c677e9ba8bfdf6d7797025ff8bfacf162e261aa2d26e4ea3102b7a7d2fd8" Jan 23 07:01:06 crc kubenswrapper[4784]: I0123 07:01:06.932308 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485861-48jdw" Jan 23 07:01:07 crc kubenswrapper[4784]: I0123 07:01:07.262237 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:01:07 crc kubenswrapper[4784]: E0123 07:01:07.262714 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:01:22 crc kubenswrapper[4784]: I0123 07:01:22.254040 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:01:22 crc kubenswrapper[4784]: E0123 07:01:22.255394 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:01:26 crc kubenswrapper[4784]: I0123 07:01:26.461119 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:01:33 crc kubenswrapper[4784]: I0123 07:01:33.253882 4784 scope.go:117] "RemoveContainer" 
containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:01:33 crc kubenswrapper[4784]: E0123 07:01:33.255020 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:01:48 crc kubenswrapper[4784]: I0123 07:01:48.254461 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:01:48 crc kubenswrapper[4784]: E0123 07:01:48.255658 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:02:00 crc kubenswrapper[4784]: I0123 07:02:00.254624 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:02:00 crc kubenswrapper[4784]: E0123 07:02:00.255640 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:02:15 crc kubenswrapper[4784]: I0123 07:02:15.256291 4784 scope.go:117] 
"RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:02:15 crc kubenswrapper[4784]: E0123 07:02:15.257844 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:02:28 crc kubenswrapper[4784]: I0123 07:02:28.256631 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:02:28 crc kubenswrapper[4784]: E0123 07:02:28.257939 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:02:43 crc kubenswrapper[4784]: I0123 07:02:43.253990 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:02:43 crc kubenswrapper[4784]: E0123 07:02:43.255060 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:02:58 crc kubenswrapper[4784]: I0123 07:02:58.254720 
4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:02:58 crc kubenswrapper[4784]: E0123 07:02:58.257527 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:03:09 crc kubenswrapper[4784]: I0123 07:03:09.253588 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:03:09 crc kubenswrapper[4784]: E0123 07:03:09.254571 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:03:23 crc kubenswrapper[4784]: I0123 07:03:23.254189 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:03:23 crc kubenswrapper[4784]: E0123 07:03:23.255317 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 
07:03:32.579371 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r49mp"] Jan 23 07:03:32 crc kubenswrapper[4784]: E0123 07:03:32.581188 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e51175-c98c-49a9-ac8b-511b91913b99" containerName="keystone-cron" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.581213 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e51175-c98c-49a9-ac8b-511b91913b99" containerName="keystone-cron" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.581543 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e51175-c98c-49a9-ac8b-511b91913b99" containerName="keystone-cron" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.586324 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.589487 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r49mp"] Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.672480 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72gtl\" (UniqueName: \"kubernetes.io/projected/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-kube-api-access-72gtl\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.672559 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-catalog-content\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.672586 4784 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-utilities\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.776167 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72gtl\" (UniqueName: \"kubernetes.io/projected/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-kube-api-access-72gtl\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.776247 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-catalog-content\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.776283 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-utilities\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.777003 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-utilities\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.777350 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-catalog-content\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.801726 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72gtl\" (UniqueName: \"kubernetes.io/projected/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-kube-api-access-72gtl\") pod \"redhat-marketplace-r49mp\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:32 crc kubenswrapper[4784]: I0123 07:03:32.922549 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:33 crc kubenswrapper[4784]: W0123 07:03:33.439431 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e095451_f0f6_4b97_a430_9cbb8c96fbd5.slice/crio-9794a4c02360cd7cd5f3fade66a4a645dd83f5ab24aafcc3be7a4b6ad43f2ba6 WatchSource:0}: Error finding container 9794a4c02360cd7cd5f3fade66a4a645dd83f5ab24aafcc3be7a4b6ad43f2ba6: Status 404 returned error can't find the container with id 9794a4c02360cd7cd5f3fade66a4a645dd83f5ab24aafcc3be7a4b6ad43f2ba6 Jan 23 07:03:33 crc kubenswrapper[4784]: I0123 07:03:33.445201 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r49mp"] Jan 23 07:03:33 crc kubenswrapper[4784]: I0123 07:03:33.664365 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r49mp" event={"ID":"6e095451-f0f6-4b97-a430-9cbb8c96fbd5","Type":"ContainerStarted","Data":"9794a4c02360cd7cd5f3fade66a4a645dd83f5ab24aafcc3be7a4b6ad43f2ba6"} Jan 23 07:03:34 crc kubenswrapper[4784]: I0123 07:03:34.255365 4784 scope.go:117] 
"RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:03:34 crc kubenswrapper[4784]: E0123 07:03:34.255920 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:03:34 crc kubenswrapper[4784]: I0123 07:03:34.678728 4784 generic.go:334] "Generic (PLEG): container finished" podID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerID="ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef" exitCode=0 Jan 23 07:03:34 crc kubenswrapper[4784]: I0123 07:03:34.678927 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r49mp" event={"ID":"6e095451-f0f6-4b97-a430-9cbb8c96fbd5","Type":"ContainerDied","Data":"ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef"} Jan 23 07:03:36 crc kubenswrapper[4784]: I0123 07:03:36.712323 4784 generic.go:334] "Generic (PLEG): container finished" podID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerID="5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315" exitCode=0 Jan 23 07:03:36 crc kubenswrapper[4784]: I0123 07:03:36.712408 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r49mp" event={"ID":"6e095451-f0f6-4b97-a430-9cbb8c96fbd5","Type":"ContainerDied","Data":"5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315"} Jan 23 07:03:37 crc kubenswrapper[4784]: I0123 07:03:37.733888 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r49mp" 
event={"ID":"6e095451-f0f6-4b97-a430-9cbb8c96fbd5","Type":"ContainerStarted","Data":"69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178"} Jan 23 07:03:37 crc kubenswrapper[4784]: I0123 07:03:37.773806 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r49mp" podStartSLOduration=3.238203513 podStartE2EDuration="5.773776108s" podCreationTimestamp="2026-01-23 07:03:32 +0000 UTC" firstStartedPulling="2026-01-23 07:03:34.68309686 +0000 UTC m=+2617.915604864" lastFinishedPulling="2026-01-23 07:03:37.218669485 +0000 UTC m=+2620.451177459" observedRunningTime="2026-01-23 07:03:37.758622114 +0000 UTC m=+2620.991130078" watchObservedRunningTime="2026-01-23 07:03:37.773776108 +0000 UTC m=+2621.006284082" Jan 23 07:03:42 crc kubenswrapper[4784]: I0123 07:03:42.923910 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:42 crc kubenswrapper[4784]: I0123 07:03:42.924808 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:42 crc kubenswrapper[4784]: I0123 07:03:42.984562 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:43 crc kubenswrapper[4784]: I0123 07:03:43.908434 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:43 crc kubenswrapper[4784]: I0123 07:03:43.978819 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r49mp"] Jan 23 07:03:45 crc kubenswrapper[4784]: I0123 07:03:45.253910 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:03:45 crc kubenswrapper[4784]: E0123 07:03:45.254694 4784 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:03:45 crc kubenswrapper[4784]: I0123 07:03:45.885742 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r49mp" podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerName="registry-server" containerID="cri-o://69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178" gracePeriod=2 Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.451503 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.489712 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72gtl\" (UniqueName: \"kubernetes.io/projected/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-kube-api-access-72gtl\") pod \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.490336 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-utilities\") pod \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\" (UID: \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.490369 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-catalog-content\") pod \"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\" (UID: 
\"6e095451-f0f6-4b97-a430-9cbb8c96fbd5\") " Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.491859 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-utilities" (OuterVolumeSpecName: "utilities") pod "6e095451-f0f6-4b97-a430-9cbb8c96fbd5" (UID: "6e095451-f0f6-4b97-a430-9cbb8c96fbd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.515236 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-kube-api-access-72gtl" (OuterVolumeSpecName: "kube-api-access-72gtl") pod "6e095451-f0f6-4b97-a430-9cbb8c96fbd5" (UID: "6e095451-f0f6-4b97-a430-9cbb8c96fbd5"). InnerVolumeSpecName "kube-api-access-72gtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.535015 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e095451-f0f6-4b97-a430-9cbb8c96fbd5" (UID: "6e095451-f0f6-4b97-a430-9cbb8c96fbd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.593778 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72gtl\" (UniqueName: \"kubernetes.io/projected/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-kube-api-access-72gtl\") on node \"crc\" DevicePath \"\"" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.593827 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.593842 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e095451-f0f6-4b97-a430-9cbb8c96fbd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.901781 4784 generic.go:334] "Generic (PLEG): container finished" podID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerID="69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178" exitCode=0 Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.901840 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r49mp" event={"ID":"6e095451-f0f6-4b97-a430-9cbb8c96fbd5","Type":"ContainerDied","Data":"69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178"} Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.901877 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r49mp" event={"ID":"6e095451-f0f6-4b97-a430-9cbb8c96fbd5","Type":"ContainerDied","Data":"9794a4c02360cd7cd5f3fade66a4a645dd83f5ab24aafcc3be7a4b6ad43f2ba6"} Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.901897 4784 scope.go:117] "RemoveContainer" containerID="69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 
07:03:46.901931 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r49mp" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.953204 4784 scope.go:117] "RemoveContainer" containerID="5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315" Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.969737 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r49mp"] Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.985624 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r49mp"] Jan 23 07:03:46 crc kubenswrapper[4784]: I0123 07:03:46.988712 4784 scope.go:117] "RemoveContainer" containerID="ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef" Jan 23 07:03:47 crc kubenswrapper[4784]: I0123 07:03:47.029714 4784 scope.go:117] "RemoveContainer" containerID="69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178" Jan 23 07:03:47 crc kubenswrapper[4784]: E0123 07:03:47.030858 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178\": container with ID starting with 69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178 not found: ID does not exist" containerID="69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178" Jan 23 07:03:47 crc kubenswrapper[4784]: I0123 07:03:47.030903 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178"} err="failed to get container status \"69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178\": rpc error: code = NotFound desc = could not find container \"69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178\": container with ID starting with 
69dd08a4df94c00436e657410e08efbc2ad5fa3d6befe0709f6ca8293d11a178 not found: ID does not exist" Jan 23 07:03:47 crc kubenswrapper[4784]: I0123 07:03:47.030942 4784 scope.go:117] "RemoveContainer" containerID="5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315" Jan 23 07:03:47 crc kubenswrapper[4784]: E0123 07:03:47.031226 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315\": container with ID starting with 5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315 not found: ID does not exist" containerID="5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315" Jan 23 07:03:47 crc kubenswrapper[4784]: I0123 07:03:47.031265 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315"} err="failed to get container status \"5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315\": rpc error: code = NotFound desc = could not find container \"5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315\": container with ID starting with 5f6dbac0747da9750f64b9cc016b089bb33317e9f00d0af43c717f33240be315 not found: ID does not exist" Jan 23 07:03:47 crc kubenswrapper[4784]: I0123 07:03:47.031284 4784 scope.go:117] "RemoveContainer" containerID="ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef" Jan 23 07:03:47 crc kubenswrapper[4784]: E0123 07:03:47.031684 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef\": container with ID starting with ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef not found: ID does not exist" containerID="ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef" Jan 23 07:03:47 crc 
kubenswrapper[4784]: I0123 07:03:47.031774 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef"} err="failed to get container status \"ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef\": rpc error: code = NotFound desc = could not find container \"ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef\": container with ID starting with ecf35c4372e72413ded1d56f34e5d8999624c8a3ce22c7eb6cb6ea586b8ea9ef not found: ID does not exist" Jan 23 07:03:47 crc kubenswrapper[4784]: I0123 07:03:47.288566 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" path="/var/lib/kubelet/pods/6e095451-f0f6-4b97-a430-9cbb8c96fbd5/volumes" Jan 23 07:03:56 crc kubenswrapper[4784]: I0123 07:03:56.255349 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:03:57 crc kubenswrapper[4784]: I0123 07:03:57.016299 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"0d4a9266cdf4255989d2905856783a386bc690d122ea96239ac56dbeae043011"} Jan 23 07:05:33 crc kubenswrapper[4784]: I0123 07:05:33.226322 4784 generic.go:334] "Generic (PLEG): container finished" podID="fe48ab60-daab-4f78-8276-76ddc1745644" containerID="ceddc7bdc7ee08cb5c7fba28123746be2dd6b468d2b34f9a7f58ccf2c4453d0e" exitCode=0 Jan 23 07:05:33 crc kubenswrapper[4784]: I0123 07:05:33.226460 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" event={"ID":"fe48ab60-daab-4f78-8276-76ddc1745644","Type":"ContainerDied","Data":"ceddc7bdc7ee08cb5c7fba28123746be2dd6b468d2b34f9a7f58ccf2c4453d0e"} Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.692366 4784 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.853856 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-inventory\") pod \"fe48ab60-daab-4f78-8276-76ddc1745644\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.854162 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-secret-0\") pod \"fe48ab60-daab-4f78-8276-76ddc1745644\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.854274 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-ssh-key-openstack-edpm-ipam\") pod \"fe48ab60-daab-4f78-8276-76ddc1745644\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.854444 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-combined-ca-bundle\") pod \"fe48ab60-daab-4f78-8276-76ddc1745644\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.854552 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d89g\" (UniqueName: \"kubernetes.io/projected/fe48ab60-daab-4f78-8276-76ddc1745644-kube-api-access-2d89g\") pod \"fe48ab60-daab-4f78-8276-76ddc1745644\" (UID: \"fe48ab60-daab-4f78-8276-76ddc1745644\") " Jan 23 07:05:34 crc kubenswrapper[4784]: 
I0123 07:05:34.860209 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fe48ab60-daab-4f78-8276-76ddc1745644" (UID: "fe48ab60-daab-4f78-8276-76ddc1745644"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.875048 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe48ab60-daab-4f78-8276-76ddc1745644-kube-api-access-2d89g" (OuterVolumeSpecName: "kube-api-access-2d89g") pod "fe48ab60-daab-4f78-8276-76ddc1745644" (UID: "fe48ab60-daab-4f78-8276-76ddc1745644"). InnerVolumeSpecName "kube-api-access-2d89g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.885568 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "fe48ab60-daab-4f78-8276-76ddc1745644" (UID: "fe48ab60-daab-4f78-8276-76ddc1745644"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.886941 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fe48ab60-daab-4f78-8276-76ddc1745644" (UID: "fe48ab60-daab-4f78-8276-76ddc1745644"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.911513 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-inventory" (OuterVolumeSpecName: "inventory") pod "fe48ab60-daab-4f78-8276-76ddc1745644" (UID: "fe48ab60-daab-4f78-8276-76ddc1745644"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.957382 4784 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.957424 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.957437 4784 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.957446 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d89g\" (UniqueName: \"kubernetes.io/projected/fe48ab60-daab-4f78-8276-76ddc1745644-kube-api-access-2d89g\") on node \"crc\" DevicePath \"\"" Jan 23 07:05:34 crc kubenswrapper[4784]: I0123 07:05:34.957458 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe48ab60-daab-4f78-8276-76ddc1745644-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.265859 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.286218 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw" event={"ID":"fe48ab60-daab-4f78-8276-76ddc1745644","Type":"ContainerDied","Data":"98f6d9bb12d483de6540cba7f87575efe4a783da9e05700befa90e62964fe373"} Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.286289 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f6d9bb12d483de6540cba7f87575efe4a783da9e05700befa90e62964fe373" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.407153 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s"] Jan 23 07:05:35 crc kubenswrapper[4784]: E0123 07:05:35.407818 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerName="extract-utilities" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.407856 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerName="extract-utilities" Jan 23 07:05:35 crc kubenswrapper[4784]: E0123 07:05:35.407883 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe48ab60-daab-4f78-8276-76ddc1745644" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.407894 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe48ab60-daab-4f78-8276-76ddc1745644" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 07:05:35 crc kubenswrapper[4784]: E0123 07:05:35.407907 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerName="registry-server" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.407915 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerName="registry-server" Jan 23 07:05:35 crc kubenswrapper[4784]: E0123 07:05:35.407926 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerName="extract-content" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.407947 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerName="extract-content" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.408247 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e095451-f0f6-4b97-a430-9cbb8c96fbd5" containerName="registry-server" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.408277 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe48ab60-daab-4f78-8276-76ddc1745644" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.409598 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.415127 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.415142 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.415550 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.416411 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.416710 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.417659 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.421889 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.435112 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s"] Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.574341 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: 
I0123 07:05:35.574434 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.574911 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.574966 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78c26\" (UniqueName: \"kubernetes.io/projected/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-kube-api-access-78c26\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.574999 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.575043 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.575216 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.575281 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.575312 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677181 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: 
\"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677250 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677288 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677346 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677403 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677874 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677908 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78c26\" (UniqueName: \"kubernetes.io/projected/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-kube-api-access-78c26\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677935 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.677964 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.678798 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: 
\"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.683592 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.684287 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.685315 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.685440 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.687288 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.692471 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.692592 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.696323 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78c26\" (UniqueName: \"kubernetes.io/projected/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-kube-api-access-78c26\") pod \"nova-edpm-deployment-openstack-edpm-ipam-v8n5s\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:35 crc kubenswrapper[4784]: I0123 07:05:35.746373 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:05:36 crc kubenswrapper[4784]: I0123 07:05:36.311241 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s"] Jan 23 07:05:36 crc kubenswrapper[4784]: I0123 07:05:36.318120 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:05:37 crc kubenswrapper[4784]: I0123 07:05:37.297234 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" event={"ID":"434b967e-b70f-4fae-9cec-5c7f6b78c5d2","Type":"ContainerStarted","Data":"bf8168c6b5f93bf6183d5e0f0c5d5dfe2fd73201163ade138c272070c20ff651"} Jan 23 07:05:37 crc kubenswrapper[4784]: I0123 07:05:37.297831 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" event={"ID":"434b967e-b70f-4fae-9cec-5c7f6b78c5d2","Type":"ContainerStarted","Data":"7a0186c6ac036c92b4196e0192607cbe78b3232449d0ba2679e0c0590fd86675"} Jan 23 07:05:37 crc kubenswrapper[4784]: I0123 07:05:37.327493 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" podStartSLOduration=1.887763458 podStartE2EDuration="2.327464434s" podCreationTimestamp="2026-01-23 07:05:35 +0000 UTC" firstStartedPulling="2026-01-23 07:05:36.317604214 +0000 UTC m=+2739.550112228" lastFinishedPulling="2026-01-23 07:05:36.75730518 +0000 UTC m=+2739.989813204" observedRunningTime="2026-01-23 07:05:37.32285839 +0000 UTC m=+2740.555366384" watchObservedRunningTime="2026-01-23 07:05:37.327464434 +0000 UTC m=+2740.559972408" Jan 23 07:06:19 crc kubenswrapper[4784]: I0123 07:06:19.929110 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-24lx5"] Jan 23 07:06:19 crc kubenswrapper[4784]: I0123 07:06:19.933346 4784 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:19 crc kubenswrapper[4784]: I0123 07:06:19.962163 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24lx5"] Jan 23 07:06:19 crc kubenswrapper[4784]: I0123 07:06:19.975832 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-catalog-content\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:19 crc kubenswrapper[4784]: I0123 07:06:19.975927 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzbp\" (UniqueName: \"kubernetes.io/projected/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-kube-api-access-kfzbp\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:19 crc kubenswrapper[4784]: I0123 07:06:19.975962 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-utilities\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.092995 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-catalog-content\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.093074 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzbp\" (UniqueName: \"kubernetes.io/projected/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-kube-api-access-kfzbp\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.093106 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-utilities\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.093485 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-catalog-content\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.093507 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-utilities\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.118074 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzbp\" (UniqueName: \"kubernetes.io/projected/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-kube-api-access-kfzbp\") pod \"certified-operators-24lx5\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.267518 4784 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.648720 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24lx5"] Jan 23 07:06:20 crc kubenswrapper[4784]: I0123 07:06:20.817487 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24lx5" event={"ID":"0f46e40e-ddf3-4dc8-965f-403586d1dc5c","Type":"ContainerStarted","Data":"beb0d53d0c462e8c53f2998df3756ea318b5c517151cdef3f3686139348c4262"} Jan 23 07:06:21 crc kubenswrapper[4784]: I0123 07:06:21.837319 4784 generic.go:334] "Generic (PLEG): container finished" podID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerID="98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94" exitCode=0 Jan 23 07:06:21 crc kubenswrapper[4784]: I0123 07:06:21.837665 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24lx5" event={"ID":"0f46e40e-ddf3-4dc8-965f-403586d1dc5c","Type":"ContainerDied","Data":"98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94"} Jan 23 07:06:22 crc kubenswrapper[4784]: I0123 07:06:22.853164 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24lx5" event={"ID":"0f46e40e-ddf3-4dc8-965f-403586d1dc5c","Type":"ContainerStarted","Data":"bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc"} Jan 23 07:06:23 crc kubenswrapper[4784]: I0123 07:06:23.603697 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:06:23 crc kubenswrapper[4784]: I0123 07:06:23.603793 4784 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:06:24 crc kubenswrapper[4784]: I0123 07:06:24.882783 4784 generic.go:334] "Generic (PLEG): container finished" podID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerID="bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc" exitCode=0 Jan 23 07:06:24 crc kubenswrapper[4784]: I0123 07:06:24.882860 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24lx5" event={"ID":"0f46e40e-ddf3-4dc8-965f-403586d1dc5c","Type":"ContainerDied","Data":"bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc"} Jan 23 07:06:25 crc kubenswrapper[4784]: I0123 07:06:25.901322 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24lx5" event={"ID":"0f46e40e-ddf3-4dc8-965f-403586d1dc5c","Type":"ContainerStarted","Data":"9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94"} Jan 23 07:06:25 crc kubenswrapper[4784]: I0123 07:06:25.929958 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-24lx5" podStartSLOduration=3.4539018009999998 podStartE2EDuration="6.929930514s" podCreationTimestamp="2026-01-23 07:06:19 +0000 UTC" firstStartedPulling="2026-01-23 07:06:21.850169308 +0000 UTC m=+2785.082677282" lastFinishedPulling="2026-01-23 07:06:25.326198021 +0000 UTC m=+2788.558705995" observedRunningTime="2026-01-23 07:06:25.927714319 +0000 UTC m=+2789.160222293" watchObservedRunningTime="2026-01-23 07:06:25.929930514 +0000 UTC m=+2789.162438518" Jan 23 07:06:30 crc kubenswrapper[4784]: I0123 07:06:30.268584 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:30 crc kubenswrapper[4784]: I0123 07:06:30.269254 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:30 crc kubenswrapper[4784]: I0123 07:06:30.322714 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:31 crc kubenswrapper[4784]: I0123 07:06:31.016562 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:31 crc kubenswrapper[4784]: I0123 07:06:31.087843 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24lx5"] Jan 23 07:06:32 crc kubenswrapper[4784]: I0123 07:06:32.984475 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-24lx5" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerName="registry-server" containerID="cri-o://9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94" gracePeriod=2 Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.706263 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.829610 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-catalog-content\") pod \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.829804 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfzbp\" (UniqueName: \"kubernetes.io/projected/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-kube-api-access-kfzbp\") pod \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.829981 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-utilities\") pod \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\" (UID: \"0f46e40e-ddf3-4dc8-965f-403586d1dc5c\") " Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.832314 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-utilities" (OuterVolumeSpecName: "utilities") pod "0f46e40e-ddf3-4dc8-965f-403586d1dc5c" (UID: "0f46e40e-ddf3-4dc8-965f-403586d1dc5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.836729 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-kube-api-access-kfzbp" (OuterVolumeSpecName: "kube-api-access-kfzbp") pod "0f46e40e-ddf3-4dc8-965f-403586d1dc5c" (UID: "0f46e40e-ddf3-4dc8-965f-403586d1dc5c"). InnerVolumeSpecName "kube-api-access-kfzbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.884085 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f46e40e-ddf3-4dc8-965f-403586d1dc5c" (UID: "0f46e40e-ddf3-4dc8-965f-403586d1dc5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.932516 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.932765 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.932848 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfzbp\" (UniqueName: \"kubernetes.io/projected/0f46e40e-ddf3-4dc8-965f-403586d1dc5c-kube-api-access-kfzbp\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.997653 4784 generic.go:334] "Generic (PLEG): container finished" podID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerID="9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94" exitCode=0 Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.997716 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24lx5" event={"ID":"0f46e40e-ddf3-4dc8-965f-403586d1dc5c","Type":"ContainerDied","Data":"9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94"} Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.997786 4784 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-24lx5" event={"ID":"0f46e40e-ddf3-4dc8-965f-403586d1dc5c","Type":"ContainerDied","Data":"beb0d53d0c462e8c53f2998df3756ea318b5c517151cdef3f3686139348c4262"} Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.997817 4784 scope.go:117] "RemoveContainer" containerID="9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94" Jan 23 07:06:33 crc kubenswrapper[4784]: I0123 07:06:33.998009 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24lx5" Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.022452 4784 scope.go:117] "RemoveContainer" containerID="bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc" Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.090448 4784 scope.go:117] "RemoveContainer" containerID="98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94" Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.099875 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24lx5"] Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.125719 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-24lx5"] Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.130660 4784 scope.go:117] "RemoveContainer" containerID="9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94" Jan 23 07:06:34 crc kubenswrapper[4784]: E0123 07:06:34.131941 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94\": container with ID starting with 9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94 not found: ID does not exist" containerID="9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94" Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 
07:06:34.131995 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94"} err="failed to get container status \"9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94\": rpc error: code = NotFound desc = could not find container \"9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94\": container with ID starting with 9cad2c4b7f1765bacbaa6eccd345e3089cab84722706c9001a532f3d579b3c94 not found: ID does not exist" Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.132034 4784 scope.go:117] "RemoveContainer" containerID="bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc" Jan 23 07:06:34 crc kubenswrapper[4784]: E0123 07:06:34.132427 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc\": container with ID starting with bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc not found: ID does not exist" containerID="bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc" Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.132493 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc"} err="failed to get container status \"bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc\": rpc error: code = NotFound desc = could not find container \"bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc\": container with ID starting with bae9bc2a9b400c8dc21b337c3ca034a7974a38241041e5d9455e99dc8087f6fc not found: ID does not exist" Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.132508 4784 scope.go:117] "RemoveContainer" containerID="98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94" Jan 23 07:06:34 crc 
kubenswrapper[4784]: E0123 07:06:34.132953 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94\": container with ID starting with 98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94 not found: ID does not exist" containerID="98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94" Jan 23 07:06:34 crc kubenswrapper[4784]: I0123 07:06:34.133112 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94"} err="failed to get container status \"98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94\": rpc error: code = NotFound desc = could not find container \"98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94\": container with ID starting with 98ef782d45ff8729e3f45fd601859faad7eff45afc934b1fdb551e88fb053b94 not found: ID does not exist" Jan 23 07:06:35 crc kubenswrapper[4784]: I0123 07:06:35.266343 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" path="/var/lib/kubelet/pods/0f46e40e-ddf3-4dc8-965f-403586d1dc5c/volumes" Jan 23 07:06:53 crc kubenswrapper[4784]: I0123 07:06:53.603093 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:06:53 crc kubenswrapper[4784]: I0123 07:06:53.604172 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 23 07:07:23 crc kubenswrapper[4784]: I0123 07:07:23.604038 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:07:23 crc kubenswrapper[4784]: I0123 07:07:23.604910 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:07:23 crc kubenswrapper[4784]: I0123 07:07:23.604976 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 07:07:23 crc kubenswrapper[4784]: I0123 07:07:23.605953 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d4a9266cdf4255989d2905856783a386bc690d122ea96239ac56dbeae043011"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:07:23 crc kubenswrapper[4784]: I0123 07:07:23.606017 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://0d4a9266cdf4255989d2905856783a386bc690d122ea96239ac56dbeae043011" gracePeriod=600 Jan 23 07:07:24 crc kubenswrapper[4784]: I0123 07:07:24.650617 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" 
containerID="0d4a9266cdf4255989d2905856783a386bc690d122ea96239ac56dbeae043011" exitCode=0 Jan 23 07:07:24 crc kubenswrapper[4784]: I0123 07:07:24.650741 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"0d4a9266cdf4255989d2905856783a386bc690d122ea96239ac56dbeae043011"} Jan 23 07:07:24 crc kubenswrapper[4784]: I0123 07:07:24.651700 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83"} Jan 23 07:07:24 crc kubenswrapper[4784]: I0123 07:07:24.651930 4784 scope.go:117] "RemoveContainer" containerID="9a7460ac213ffcbbae82208ca12e00021cadb3a726ee2b02bcac07bff1bc2cf3" Jan 23 07:07:27 crc kubenswrapper[4784]: I0123 07:07:27.952769 4784 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sgngm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:07:27 crc kubenswrapper[4784]: I0123 07:07:27.953647 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" podUID="277df242-6850-47b2-af69-2e33cd07657b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:07:27 crc kubenswrapper[4784]: I0123 07:07:27.972027 4784 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sgngm container/olm-operator namespace/openshift-operator-lifecycle-manager: 
Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:07:27 crc kubenswrapper[4784]: I0123 07:07:27.972590 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sgngm" podUID="277df242-6850-47b2-af69-2e33cd07657b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:08:30 crc kubenswrapper[4784]: I0123 07:08:30.527006 4784 generic.go:334] "Generic (PLEG): container finished" podID="434b967e-b70f-4fae-9cec-5c7f6b78c5d2" containerID="bf8168c6b5f93bf6183d5e0f0c5d5dfe2fd73201163ade138c272070c20ff651" exitCode=0 Jan 23 07:08:30 crc kubenswrapper[4784]: I0123 07:08:30.527124 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" event={"ID":"434b967e-b70f-4fae-9cec-5c7f6b78c5d2","Type":"ContainerDied","Data":"bf8168c6b5f93bf6183d5e0f0c5d5dfe2fd73201163ade138c272070c20ff651"} Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.003279 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.117338 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-combined-ca-bundle\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.117553 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-1\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.117591 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-extra-config-0\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.117736 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-0\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.117807 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78c26\" (UniqueName: \"kubernetes.io/projected/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-kube-api-access-78c26\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.117841 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-inventory\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.117957 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-0\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.118062 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-1\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.118104 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-ssh-key-openstack-edpm-ipam\") pod \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\" (UID: \"434b967e-b70f-4fae-9cec-5c7f6b78c5d2\") " Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.147037 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-kube-api-access-78c26" (OuterVolumeSpecName: "kube-api-access-78c26") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "kube-api-access-78c26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.148036 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.168559 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.169864 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-inventory" (OuterVolumeSpecName: "inventory") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.171981 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.172441 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.174870 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.176049 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.180418 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "434b967e-b70f-4fae-9cec-5c7f6b78c5d2" (UID: "434b967e-b70f-4fae-9cec-5c7f6b78c5d2"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222432 4784 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222477 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222493 4784 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222510 4784 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222524 4784 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222536 4784 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222548 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78c26\" (UniqueName: 
\"kubernetes.io/projected/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-kube-api-access-78c26\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222563 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.222580 4784 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/434b967e-b70f-4fae-9cec-5c7f6b78c5d2-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.552251 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" event={"ID":"434b967e-b70f-4fae-9cec-5c7f6b78c5d2","Type":"ContainerDied","Data":"7a0186c6ac036c92b4196e0192607cbe78b3232449d0ba2679e0c0590fd86675"} Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.552318 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a0186c6ac036c92b4196e0192607cbe78b3232449d0ba2679e0c0590fd86675" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.552410 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-v8n5s" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.725367 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl"] Jan 23 07:08:32 crc kubenswrapper[4784]: E0123 07:08:32.727326 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434b967e-b70f-4fae-9cec-5c7f6b78c5d2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.727499 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="434b967e-b70f-4fae-9cec-5c7f6b78c5d2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 07:08:32 crc kubenswrapper[4784]: E0123 07:08:32.727678 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerName="extract-utilities" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.727798 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerName="extract-utilities" Jan 23 07:08:32 crc kubenswrapper[4784]: E0123 07:08:32.727962 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerName="registry-server" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.728036 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerName="registry-server" Jan 23 07:08:32 crc kubenswrapper[4784]: E0123 07:08:32.728137 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerName="extract-content" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.728225 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerName="extract-content" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.729348 4784 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0f46e40e-ddf3-4dc8-965f-403586d1dc5c" containerName="registry-server" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.729502 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="434b967e-b70f-4fae-9cec-5c7f6b78c5d2" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.731262 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.735385 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-82g2r" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.735613 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.735791 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.735841 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.735954 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.755202 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl"] Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.845262 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.845331 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.845394 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ckn\" (UniqueName: \"kubernetes.io/projected/39095423-09e7-4099-8256-b1eab02f4707-kube-api-access-b6ckn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.845791 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.845903 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: 
\"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.846003 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.846074 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.948365 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.948491 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc 
kubenswrapper[4784]: I0123 07:08:32.948578 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.948687 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.948782 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.948883 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ckn\" (UniqueName: \"kubernetes.io/projected/39095423-09e7-4099-8256-b1eab02f4707-kube-api-access-b6ckn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.949010 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.954234 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.955261 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.955329 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.956447 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.956810 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.967311 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:32 crc kubenswrapper[4784]: I0123 07:08:32.972842 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ckn\" (UniqueName: \"kubernetes.io/projected/39095423-09e7-4099-8256-b1eab02f4707-kube-api-access-b6ckn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:33 crc kubenswrapper[4784]: I0123 07:08:33.094307 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:08:33 crc kubenswrapper[4784]: I0123 07:08:33.485400 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl"] Jan 23 07:08:33 crc kubenswrapper[4784]: I0123 07:08:33.564383 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" event={"ID":"39095423-09e7-4099-8256-b1eab02f4707","Type":"ContainerStarted","Data":"699d62f5551528845680ae7c63f7b69e8641fb2d2e125a0f04143a74fedf592d"} Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.099992 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-slj79"] Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.104767 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.131203 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-slj79"] Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.179371 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-catalog-content\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.179646 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-896dp\" (UniqueName: \"kubernetes.io/projected/608ebb32-ff23-4d75-909c-b1b121c972ae-kube-api-access-896dp\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" 
Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.179831 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-utilities\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.283347 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-utilities\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.284179 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-catalog-content\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.284250 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-896dp\" (UniqueName: \"kubernetes.io/projected/608ebb32-ff23-4d75-909c-b1b121c972ae-kube-api-access-896dp\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.284249 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-utilities\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc 
kubenswrapper[4784]: I0123 07:08:34.284888 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-catalog-content\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.310506 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-896dp\" (UniqueName: \"kubernetes.io/projected/608ebb32-ff23-4d75-909c-b1b121c972ae-kube-api-access-896dp\") pod \"community-operators-slj79\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.496517 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.583850 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" event={"ID":"39095423-09e7-4099-8256-b1eab02f4707","Type":"ContainerStarted","Data":"9ddd789a7e41b86be612087fea8e9845dea599ffec7b0829b2b7164989204a2c"} Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 07:08:34.624330 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" podStartSLOduration=2.071942385 podStartE2EDuration="2.62429069s" podCreationTimestamp="2026-01-23 07:08:32 +0000 UTC" firstStartedPulling="2026-01-23 07:08:33.493656276 +0000 UTC m=+2916.726164250" lastFinishedPulling="2026-01-23 07:08:34.046004581 +0000 UTC m=+2917.278512555" observedRunningTime="2026-01-23 07:08:34.614288383 +0000 UTC m=+2917.846796357" watchObservedRunningTime="2026-01-23 07:08:34.62429069 +0000 UTC m=+2917.856798664" Jan 23 07:08:34 crc kubenswrapper[4784]: I0123 
07:08:34.867311 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-slj79"] Jan 23 07:08:34 crc kubenswrapper[4784]: W0123 07:08:34.873794 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod608ebb32_ff23_4d75_909c_b1b121c972ae.slice/crio-e7c2ef37756e835329f78231b6b178dcc33aed83c46922a3ba13dbe6f0847570 WatchSource:0}: Error finding container e7c2ef37756e835329f78231b6b178dcc33aed83c46922a3ba13dbe6f0847570: Status 404 returned error can't find the container with id e7c2ef37756e835329f78231b6b178dcc33aed83c46922a3ba13dbe6f0847570 Jan 23 07:08:35 crc kubenswrapper[4784]: I0123 07:08:35.620221 4784 generic.go:334] "Generic (PLEG): container finished" podID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerID="2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269" exitCode=0 Jan 23 07:08:35 crc kubenswrapper[4784]: I0123 07:08:35.622537 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slj79" event={"ID":"608ebb32-ff23-4d75-909c-b1b121c972ae","Type":"ContainerDied","Data":"2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269"} Jan 23 07:08:35 crc kubenswrapper[4784]: I0123 07:08:35.622576 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slj79" event={"ID":"608ebb32-ff23-4d75-909c-b1b121c972ae","Type":"ContainerStarted","Data":"e7c2ef37756e835329f78231b6b178dcc33aed83c46922a3ba13dbe6f0847570"} Jan 23 07:08:37 crc kubenswrapper[4784]: I0123 07:08:37.648628 4784 generic.go:334] "Generic (PLEG): container finished" podID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerID="24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f" exitCode=0 Jan 23 07:08:37 crc kubenswrapper[4784]: I0123 07:08:37.648731 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slj79" 
event={"ID":"608ebb32-ff23-4d75-909c-b1b121c972ae","Type":"ContainerDied","Data":"24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f"} Jan 23 07:08:38 crc kubenswrapper[4784]: I0123 07:08:38.674059 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slj79" event={"ID":"608ebb32-ff23-4d75-909c-b1b121c972ae","Type":"ContainerStarted","Data":"d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6"} Jan 23 07:08:38 crc kubenswrapper[4784]: I0123 07:08:38.700714 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-slj79" podStartSLOduration=2.100760162 podStartE2EDuration="4.700687673s" podCreationTimestamp="2026-01-23 07:08:34 +0000 UTC" firstStartedPulling="2026-01-23 07:08:35.634841873 +0000 UTC m=+2918.867349857" lastFinishedPulling="2026-01-23 07:08:38.234769394 +0000 UTC m=+2921.467277368" observedRunningTime="2026-01-23 07:08:38.700421866 +0000 UTC m=+2921.932929850" watchObservedRunningTime="2026-01-23 07:08:38.700687673 +0000 UTC m=+2921.933195647" Jan 23 07:08:44 crc kubenswrapper[4784]: I0123 07:08:44.496670 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:44 crc kubenswrapper[4784]: I0123 07:08:44.497666 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:44 crc kubenswrapper[4784]: I0123 07:08:44.552940 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:44 crc kubenswrapper[4784]: I0123 07:08:44.854184 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:44 crc kubenswrapper[4784]: I0123 07:08:44.920362 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-slj79"] Jan 23 07:08:46 crc kubenswrapper[4784]: I0123 07:08:46.782096 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-slj79" podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerName="registry-server" containerID="cri-o://d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6" gracePeriod=2 Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.354366 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.497391 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-catalog-content\") pod \"608ebb32-ff23-4d75-909c-b1b121c972ae\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.497593 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-896dp\" (UniqueName: \"kubernetes.io/projected/608ebb32-ff23-4d75-909c-b1b121c972ae-kube-api-access-896dp\") pod \"608ebb32-ff23-4d75-909c-b1b121c972ae\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.497887 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-utilities\") pod \"608ebb32-ff23-4d75-909c-b1b121c972ae\" (UID: \"608ebb32-ff23-4d75-909c-b1b121c972ae\") " Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.498984 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-utilities" (OuterVolumeSpecName: "utilities") pod "608ebb32-ff23-4d75-909c-b1b121c972ae" (UID: 
"608ebb32-ff23-4d75-909c-b1b121c972ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.513727 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608ebb32-ff23-4d75-909c-b1b121c972ae-kube-api-access-896dp" (OuterVolumeSpecName: "kube-api-access-896dp") pod "608ebb32-ff23-4d75-909c-b1b121c972ae" (UID: "608ebb32-ff23-4d75-909c-b1b121c972ae"). InnerVolumeSpecName "kube-api-access-896dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.583907 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "608ebb32-ff23-4d75-909c-b1b121c972ae" (UID: "608ebb32-ff23-4d75-909c-b1b121c972ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.600953 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.601005 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/608ebb32-ff23-4d75-909c-b1b121c972ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.601019 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-896dp\" (UniqueName: \"kubernetes.io/projected/608ebb32-ff23-4d75-909c-b1b121c972ae-kube-api-access-896dp\") on node \"crc\" DevicePath \"\"" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.797873 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerID="d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6" exitCode=0 Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.797983 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slj79" event={"ID":"608ebb32-ff23-4d75-909c-b1b121c972ae","Type":"ContainerDied","Data":"d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6"} Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.798014 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-slj79" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.798971 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slj79" event={"ID":"608ebb32-ff23-4d75-909c-b1b121c972ae","Type":"ContainerDied","Data":"e7c2ef37756e835329f78231b6b178dcc33aed83c46922a3ba13dbe6f0847570"} Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.799022 4784 scope.go:117] "RemoveContainer" containerID="d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.846543 4784 scope.go:117] "RemoveContainer" containerID="24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.855725 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-slj79"] Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.866892 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-slj79"] Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.898289 4784 scope.go:117] "RemoveContainer" containerID="2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.963593 4784 scope.go:117] "RemoveContainer" 
containerID="d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6" Jan 23 07:08:47 crc kubenswrapper[4784]: E0123 07:08:47.964262 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6\": container with ID starting with d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6 not found: ID does not exist" containerID="d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.964320 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6"} err="failed to get container status \"d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6\": rpc error: code = NotFound desc = could not find container \"d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6\": container with ID starting with d5005e1cef2083d1cb42a854edb76130631b52a002317949cc289c75985357a6 not found: ID does not exist" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.964381 4784 scope.go:117] "RemoveContainer" containerID="24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f" Jan 23 07:08:47 crc kubenswrapper[4784]: E0123 07:08:47.964975 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f\": container with ID starting with 24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f not found: ID does not exist" containerID="24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.965021 4784 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f"} err="failed to get container status \"24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f\": rpc error: code = NotFound desc = could not find container \"24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f\": container with ID starting with 24ba5be4bcb39bc3d3a1277224d90c7bf6c9cc59197f33e191741cfd6967961f not found: ID does not exist" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.965057 4784 scope.go:117] "RemoveContainer" containerID="2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269" Jan 23 07:08:47 crc kubenswrapper[4784]: E0123 07:08:47.965408 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269\": container with ID starting with 2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269 not found: ID does not exist" containerID="2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269" Jan 23 07:08:47 crc kubenswrapper[4784]: I0123 07:08:47.965461 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269"} err="failed to get container status \"2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269\": rpc error: code = NotFound desc = could not find container \"2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269\": container with ID starting with 2c8bd9f14a307867c301ae2fd817102fb5293fdb9797801048b5be8e8401b269 not found: ID does not exist" Jan 23 07:08:49 crc kubenswrapper[4784]: I0123 07:08:49.274889 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" path="/var/lib/kubelet/pods/608ebb32-ff23-4d75-909c-b1b121c972ae/volumes" Jan 23 07:09:53 crc kubenswrapper[4784]: I0123 
07:09:53.602930 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:09:53 crc kubenswrapper[4784]: I0123 07:09:53.603778 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.122480 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lvx8r"] Jan 23 07:10:17 crc kubenswrapper[4784]: E0123 07:10:17.123970 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerName="extract-utilities" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.124019 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerName="extract-utilities" Jan 23 07:10:17 crc kubenswrapper[4784]: E0123 07:10:17.124097 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerName="registry-server" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.124116 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerName="registry-server" Jan 23 07:10:17 crc kubenswrapper[4784]: E0123 07:10:17.124170 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerName="extract-content" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.124186 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerName="extract-content" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.124905 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="608ebb32-ff23-4d75-909c-b1b121c972ae" containerName="registry-server" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.128160 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.152345 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvx8r"] Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.173813 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-catalog-content\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.174003 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-utilities\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.174228 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82smc\" (UniqueName: \"kubernetes.io/projected/7256997d-1626-42df-8c41-0487e94eefae-kube-api-access-82smc\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.280420 4784 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-catalog-content\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.280533 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-utilities\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.280600 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82smc\" (UniqueName: \"kubernetes.io/projected/7256997d-1626-42df-8c41-0487e94eefae-kube-api-access-82smc\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.281635 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-catalog-content\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.286685 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-utilities\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.306062 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82smc\" (UniqueName: 
\"kubernetes.io/projected/7256997d-1626-42df-8c41-0487e94eefae-kube-api-access-82smc\") pod \"redhat-operators-lvx8r\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.480714 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:17 crc kubenswrapper[4784]: I0123 07:10:17.988496 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvx8r"] Jan 23 07:10:18 crc kubenswrapper[4784]: I0123 07:10:18.058061 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvx8r" event={"ID":"7256997d-1626-42df-8c41-0487e94eefae","Type":"ContainerStarted","Data":"1c2dbf401c266e55604c2703a0f5f48a09d1ff9b620d877105422f150a1868ab"} Jan 23 07:10:19 crc kubenswrapper[4784]: I0123 07:10:19.071916 4784 generic.go:334] "Generic (PLEG): container finished" podID="7256997d-1626-42df-8c41-0487e94eefae" containerID="a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3" exitCode=0 Jan 23 07:10:19 crc kubenswrapper[4784]: I0123 07:10:19.071991 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvx8r" event={"ID":"7256997d-1626-42df-8c41-0487e94eefae","Type":"ContainerDied","Data":"a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3"} Jan 23 07:10:20 crc kubenswrapper[4784]: I0123 07:10:20.090425 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvx8r" event={"ID":"7256997d-1626-42df-8c41-0487e94eefae","Type":"ContainerStarted","Data":"3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294"} Jan 23 07:10:23 crc kubenswrapper[4784]: I0123 07:10:23.139092 4784 generic.go:334] "Generic (PLEG): container finished" podID="7256997d-1626-42df-8c41-0487e94eefae" 
containerID="3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294" exitCode=0 Jan 23 07:10:23 crc kubenswrapper[4784]: I0123 07:10:23.139202 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvx8r" event={"ID":"7256997d-1626-42df-8c41-0487e94eefae","Type":"ContainerDied","Data":"3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294"} Jan 23 07:10:23 crc kubenswrapper[4784]: I0123 07:10:23.603080 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:10:23 crc kubenswrapper[4784]: I0123 07:10:23.603790 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:10:25 crc kubenswrapper[4784]: I0123 07:10:25.172938 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvx8r" event={"ID":"7256997d-1626-42df-8c41-0487e94eefae","Type":"ContainerStarted","Data":"622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb"} Jan 23 07:10:25 crc kubenswrapper[4784]: I0123 07:10:25.214402 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lvx8r" podStartSLOduration=3.482528063 podStartE2EDuration="8.214351166s" podCreationTimestamp="2026-01-23 07:10:17 +0000 UTC" firstStartedPulling="2026-01-23 07:10:19.075511367 +0000 UTC m=+3022.308019381" lastFinishedPulling="2026-01-23 07:10:23.80733451 +0000 UTC m=+3027.039842484" observedRunningTime="2026-01-23 07:10:25.212398027 +0000 
UTC m=+3028.444906011" watchObservedRunningTime="2026-01-23 07:10:25.214351166 +0000 UTC m=+3028.446859180" Jan 23 07:10:27 crc kubenswrapper[4784]: I0123 07:10:27.481792 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:27 crc kubenswrapper[4784]: I0123 07:10:27.482557 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:28 crc kubenswrapper[4784]: I0123 07:10:28.557572 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lvx8r" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="registry-server" probeResult="failure" output=< Jan 23 07:10:28 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 07:10:28 crc kubenswrapper[4784]: > Jan 23 07:10:37 crc kubenswrapper[4784]: I0123 07:10:37.547888 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:37 crc kubenswrapper[4784]: I0123 07:10:37.641277 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:37 crc kubenswrapper[4784]: I0123 07:10:37.793205 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lvx8r"] Jan 23 07:10:39 crc kubenswrapper[4784]: I0123 07:10:39.372886 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lvx8r" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="registry-server" containerID="cri-o://622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb" gracePeriod=2 Jan 23 07:10:39 crc kubenswrapper[4784]: I0123 07:10:39.916570 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.034049 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-utilities\") pod \"7256997d-1626-42df-8c41-0487e94eefae\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.034549 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82smc\" (UniqueName: \"kubernetes.io/projected/7256997d-1626-42df-8c41-0487e94eefae-kube-api-access-82smc\") pod \"7256997d-1626-42df-8c41-0487e94eefae\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.034616 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-catalog-content\") pod \"7256997d-1626-42df-8c41-0487e94eefae\" (UID: \"7256997d-1626-42df-8c41-0487e94eefae\") " Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.035267 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-utilities" (OuterVolumeSpecName: "utilities") pod "7256997d-1626-42df-8c41-0487e94eefae" (UID: "7256997d-1626-42df-8c41-0487e94eefae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.040643 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7256997d-1626-42df-8c41-0487e94eefae-kube-api-access-82smc" (OuterVolumeSpecName: "kube-api-access-82smc") pod "7256997d-1626-42df-8c41-0487e94eefae" (UID: "7256997d-1626-42df-8c41-0487e94eefae"). InnerVolumeSpecName "kube-api-access-82smc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.137362 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82smc\" (UniqueName: \"kubernetes.io/projected/7256997d-1626-42df-8c41-0487e94eefae-kube-api-access-82smc\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.137403 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.162689 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7256997d-1626-42df-8c41-0487e94eefae" (UID: "7256997d-1626-42df-8c41-0487e94eefae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.240173 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7256997d-1626-42df-8c41-0487e94eefae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.387194 4784 generic.go:334] "Generic (PLEG): container finished" podID="7256997d-1626-42df-8c41-0487e94eefae" containerID="622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb" exitCode=0 Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.387250 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvx8r" event={"ID":"7256997d-1626-42df-8c41-0487e94eefae","Type":"ContainerDied","Data":"622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb"} Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.387291 4784 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-lvx8r" event={"ID":"7256997d-1626-42df-8c41-0487e94eefae","Type":"ContainerDied","Data":"1c2dbf401c266e55604c2703a0f5f48a09d1ff9b620d877105422f150a1868ab"} Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.387311 4784 scope.go:117] "RemoveContainer" containerID="622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.387362 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvx8r" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.443062 4784 scope.go:117] "RemoveContainer" containerID="3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.451697 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lvx8r"] Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.469083 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lvx8r"] Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.493456 4784 scope.go:117] "RemoveContainer" containerID="a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.538913 4784 scope.go:117] "RemoveContainer" containerID="622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb" Jan 23 07:10:40 crc kubenswrapper[4784]: E0123 07:10:40.539626 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb\": container with ID starting with 622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb not found: ID does not exist" containerID="622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.539705 4784 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb"} err="failed to get container status \"622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb\": rpc error: code = NotFound desc = could not find container \"622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb\": container with ID starting with 622566af4c7d5cd5a65e0987cffeadba182308278daaa073f9d00cd74193dceb not found: ID does not exist" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.539768 4784 scope.go:117] "RemoveContainer" containerID="3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294" Jan 23 07:10:40 crc kubenswrapper[4784]: E0123 07:10:40.540307 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294\": container with ID starting with 3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294 not found: ID does not exist" containerID="3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.540353 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294"} err="failed to get container status \"3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294\": rpc error: code = NotFound desc = could not find container \"3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294\": container with ID starting with 3297e2114298893e9a750f120f857068f01877150c92d8d25406a33876776294 not found: ID does not exist" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.540384 4784 scope.go:117] "RemoveContainer" containerID="a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3" Jan 23 07:10:40 crc kubenswrapper[4784]: E0123 
07:10:40.540735 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3\": container with ID starting with a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3 not found: ID does not exist" containerID="a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3" Jan 23 07:10:40 crc kubenswrapper[4784]: I0123 07:10:40.540788 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3"} err="failed to get container status \"a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3\": rpc error: code = NotFound desc = could not find container \"a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3\": container with ID starting with a3ae14515d9ec3848d10651941cb3748c5f20af8536b3ac1ae8a4294d68cf8a3 not found: ID does not exist" Jan 23 07:10:41 crc kubenswrapper[4784]: I0123 07:10:41.292979 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7256997d-1626-42df-8c41-0487e94eefae" path="/var/lib/kubelet/pods/7256997d-1626-42df-8c41-0487e94eefae/volumes" Jan 23 07:10:53 crc kubenswrapper[4784]: I0123 07:10:53.603317 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:10:53 crc kubenswrapper[4784]: I0123 07:10:53.604369 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 23 07:10:53 crc kubenswrapper[4784]: I0123 07:10:53.604443 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 07:10:53 crc kubenswrapper[4784]: I0123 07:10:53.605719 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:10:53 crc kubenswrapper[4784]: I0123 07:10:53.605835 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" gracePeriod=600 Jan 23 07:10:53 crc kubenswrapper[4784]: E0123 07:10:53.744351 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:10:54 crc kubenswrapper[4784]: I0123 07:10:54.577006 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" exitCode=0 Jan 23 07:10:54 crc kubenswrapper[4784]: I0123 07:10:54.577090 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" 
event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83"} Jan 23 07:10:54 crc kubenswrapper[4784]: I0123 07:10:54.577180 4784 scope.go:117] "RemoveContainer" containerID="0d4a9266cdf4255989d2905856783a386bc690d122ea96239ac56dbeae043011" Jan 23 07:10:54 crc kubenswrapper[4784]: I0123 07:10:54.578238 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:10:54 crc kubenswrapper[4784]: E0123 07:10:54.578796 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:11:02 crc kubenswrapper[4784]: I0123 07:11:02.696526 4784 generic.go:334] "Generic (PLEG): container finished" podID="39095423-09e7-4099-8256-b1eab02f4707" containerID="9ddd789a7e41b86be612087fea8e9845dea599ffec7b0829b2b7164989204a2c" exitCode=0 Jan 23 07:11:02 crc kubenswrapper[4784]: I0123 07:11:02.696623 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" event={"ID":"39095423-09e7-4099-8256-b1eab02f4707","Type":"ContainerDied","Data":"9ddd789a7e41b86be612087fea8e9845dea599ffec7b0829b2b7164989204a2c"} Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.330976 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.412117 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-telemetry-combined-ca-bundle\") pod \"39095423-09e7-4099-8256-b1eab02f4707\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.412414 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-inventory\") pod \"39095423-09e7-4099-8256-b1eab02f4707\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.412544 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-0\") pod \"39095423-09e7-4099-8256-b1eab02f4707\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.412651 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-2\") pod \"39095423-09e7-4099-8256-b1eab02f4707\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.412747 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-1\") pod \"39095423-09e7-4099-8256-b1eab02f4707\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " Jan 23 07:11:04 crc 
kubenswrapper[4784]: I0123 07:11:04.412938 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6ckn\" (UniqueName: \"kubernetes.io/projected/39095423-09e7-4099-8256-b1eab02f4707-kube-api-access-b6ckn\") pod \"39095423-09e7-4099-8256-b1eab02f4707\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.413035 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ssh-key-openstack-edpm-ipam\") pod \"39095423-09e7-4099-8256-b1eab02f4707\" (UID: \"39095423-09e7-4099-8256-b1eab02f4707\") " Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.423093 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39095423-09e7-4099-8256-b1eab02f4707-kube-api-access-b6ckn" (OuterVolumeSpecName: "kube-api-access-b6ckn") pod "39095423-09e7-4099-8256-b1eab02f4707" (UID: "39095423-09e7-4099-8256-b1eab02f4707"). InnerVolumeSpecName "kube-api-access-b6ckn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.434776 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "39095423-09e7-4099-8256-b1eab02f4707" (UID: "39095423-09e7-4099-8256-b1eab02f4707"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.450857 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "39095423-09e7-4099-8256-b1eab02f4707" (UID: "39095423-09e7-4099-8256-b1eab02f4707"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.454782 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39095423-09e7-4099-8256-b1eab02f4707" (UID: "39095423-09e7-4099-8256-b1eab02f4707"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.454970 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "39095423-09e7-4099-8256-b1eab02f4707" (UID: "39095423-09e7-4099-8256-b1eab02f4707"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.461250 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-inventory" (OuterVolumeSpecName: "inventory") pod "39095423-09e7-4099-8256-b1eab02f4707" (UID: "39095423-09e7-4099-8256-b1eab02f4707"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.462868 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "39095423-09e7-4099-8256-b1eab02f4707" (UID: "39095423-09e7-4099-8256-b1eab02f4707"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.516368 4784 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.516426 4784 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.516442 4784 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.516453 4784 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.516464 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6ckn\" (UniqueName: \"kubernetes.io/projected/39095423-09e7-4099-8256-b1eab02f4707-kube-api-access-b6ckn\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:04 
crc kubenswrapper[4784]: I0123 07:11:04.516474 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.516487 4784 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39095423-09e7-4099-8256-b1eab02f4707-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.732242 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" event={"ID":"39095423-09e7-4099-8256-b1eab02f4707","Type":"ContainerDied","Data":"699d62f5551528845680ae7c63f7b69e8641fb2d2e125a0f04143a74fedf592d"} Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.732306 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="699d62f5551528845680ae7c63f7b69e8641fb2d2e125a0f04143a74fedf592d" Jan 23 07:11:04 crc kubenswrapper[4784]: I0123 07:11:04.732425 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl" Jan 23 07:11:06 crc kubenswrapper[4784]: I0123 07:11:06.254415 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:11:06 crc kubenswrapper[4784]: E0123 07:11:06.255182 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:11:18 crc kubenswrapper[4784]: I0123 07:11:18.254157 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:11:18 crc kubenswrapper[4784]: E0123 07:11:18.255504 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:11:30 crc kubenswrapper[4784]: I0123 07:11:30.253935 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:11:30 crc kubenswrapper[4784]: E0123 07:11:30.254935 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:11:44 crc kubenswrapper[4784]: I0123 07:11:44.255462 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:11:44 crc kubenswrapper[4784]: E0123 07:11:44.256980 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:11:45 crc kubenswrapper[4784]: I0123 07:11:45.849378 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:11:45 crc kubenswrapper[4784]: I0123 07:11:45.850135 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="prometheus" containerID="cri-o://6a28253ac0032048200290f552c55613ace6ed8d277a52da3224b5099aebf6b7" gracePeriod=600 Jan 23 07:11:45 crc kubenswrapper[4784]: I0123 07:11:45.850324 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="thanos-sidecar" containerID="cri-o://d0692b3ffba15cc1bbd3a3e2067b83e61e9daa350591185fb3f1858b49370111" gracePeriod=600 Jan 23 07:11:45 crc kubenswrapper[4784]: I0123 07:11:45.850379 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="config-reloader" 
containerID="cri-o://4d908ed3f9dc347b45dda7638e1e76f6fb1e971d09085a04efeab7668f9a0ddd" gracePeriod=600 Jan 23 07:11:46 crc kubenswrapper[4784]: I0123 07:11:46.324202 4784 generic.go:334] "Generic (PLEG): container finished" podID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerID="d0692b3ffba15cc1bbd3a3e2067b83e61e9daa350591185fb3f1858b49370111" exitCode=0 Jan 23 07:11:46 crc kubenswrapper[4784]: I0123 07:11:46.324673 4784 generic.go:334] "Generic (PLEG): container finished" podID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerID="4d908ed3f9dc347b45dda7638e1e76f6fb1e971d09085a04efeab7668f9a0ddd" exitCode=0 Jan 23 07:11:46 crc kubenswrapper[4784]: I0123 07:11:46.324685 4784 generic.go:334] "Generic (PLEG): container finished" podID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerID="6a28253ac0032048200290f552c55613ace6ed8d277a52da3224b5099aebf6b7" exitCode=0 Jan 23 07:11:46 crc kubenswrapper[4784]: I0123 07:11:46.324309 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerDied","Data":"d0692b3ffba15cc1bbd3a3e2067b83e61e9daa350591185fb3f1858b49370111"} Jan 23 07:11:46 crc kubenswrapper[4784]: I0123 07:11:46.324758 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerDied","Data":"4d908ed3f9dc347b45dda7638e1e76f6fb1e971d09085a04efeab7668f9a0ddd"} Jan 23 07:11:46 crc kubenswrapper[4784]: I0123 07:11:46.324794 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerDied","Data":"6a28253ac0032048200290f552c55613ace6ed8d277a52da3224b5099aebf6b7"} Jan 23 07:11:46 crc kubenswrapper[4784]: I0123 07:11:46.904411 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.036154 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.036235 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-thanos-prometheus-http-client-file\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.036271 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-0\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.036634 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.036674 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-config\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.036709 4784 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc6kd\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-kube-api-access-vc6kd\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.036772 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.036827 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-1\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.037822 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.037942 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-tls-assets\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.037974 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.038050 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e974f78-4c17-480b-8a35-285a89f1cb35-config-out\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.038080 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-secret-combined-ca-bundle\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.038106 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-2\") pod \"3e974f78-4c17-480b-8a35-285a89f1cb35\" (UID: \"3e974f78-4c17-480b-8a35-285a89f1cb35\") " Jan 23 07:11:47 crc 
kubenswrapper[4784]: I0123 07:11:47.038102 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.038649 4784 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.038667 4784 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.039220 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.046519 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e974f78-4c17-480b-8a35-285a89f1cb35-config-out" (OuterVolumeSpecName: "config-out") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). 
InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.046587 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-kube-api-access-vc6kd" (OuterVolumeSpecName: "kube-api-access-vc6kd") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "kube-api-access-vc6kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.046584 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.047520 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-config" (OuterVolumeSpecName: "config") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.047925 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.048766 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.052249 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.053686 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.076066 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "pvc-e6192221-140f-46e3-a3e7-14d2acad4265". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.123146 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config" (OuterVolumeSpecName: "web-config") pod "3e974f78-4c17-480b-8a35-285a89f1cb35" (UID: "3e974f78-4c17-480b-8a35-285a89f1cb35"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.150681 4784 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.150753 4784 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.150841 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") on node \"crc\" " Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.151144 4784 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.151159 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc6kd\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-kube-api-access-vc6kd\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.151172 4784 reconciler_common.go:293] "Volume detached 
for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.151185 4784 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e974f78-4c17-480b-8a35-285a89f1cb35-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.151621 4784 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.151642 4784 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e974f78-4c17-480b-8a35-285a89f1cb35-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.151658 4784 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e974f78-4c17-480b-8a35-285a89f1cb35-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.151669 4784 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3e974f78-4c17-480b-8a35-285a89f1cb35-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.184554 4784 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.185366 4784 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e6192221-140f-46e3-a3e7-14d2acad4265" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265") on node "crc" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.254112 4784 reconciler_common.go:293] "Volume detached for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.355596 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e974f78-4c17-480b-8a35-285a89f1cb35","Type":"ContainerDied","Data":"cd5cf7cffe80c6f4a5d73d06eb17dac1188c4dba2be507956617f40abe0b2abf"} Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.356088 4784 scope.go:117] "RemoveContainer" containerID="d0692b3ffba15cc1bbd3a3e2067b83e61e9daa350591185fb3f1858b49370111" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.355864 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.394305 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.405175 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.405859 4784 scope.go:117] "RemoveContainer" containerID="4d908ed3f9dc347b45dda7638e1e76f6fb1e971d09085a04efeab7668f9a0ddd" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.432817 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:11:47 crc kubenswrapper[4784]: E0123 07:11:47.434063 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="registry-server" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.434235 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="registry-server" Jan 23 07:11:47 crc kubenswrapper[4784]: E0123 07:11:47.434335 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="prometheus" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.434391 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="prometheus" Jan 23 07:11:47 crc kubenswrapper[4784]: E0123 07:11:47.434456 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39095423-09e7-4099-8256-b1eab02f4707" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.434516 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="39095423-09e7-4099-8256-b1eab02f4707" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 07:11:47 crc 
kubenswrapper[4784]: E0123 07:11:47.434579 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="init-config-reloader" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.434639 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="init-config-reloader" Jan 23 07:11:47 crc kubenswrapper[4784]: E0123 07:11:47.434709 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="thanos-sidecar" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.434785 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="thanos-sidecar" Jan 23 07:11:47 crc kubenswrapper[4784]: E0123 07:11:47.434883 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="extract-content" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.435045 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="extract-content" Jan 23 07:11:47 crc kubenswrapper[4784]: E0123 07:11:47.435140 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="extract-utilities" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.435250 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="extract-utilities" Jan 23 07:11:47 crc kubenswrapper[4784]: E0123 07:11:47.435306 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="config-reloader" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.435360 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="config-reloader" Jan 23 07:11:47 crc 
kubenswrapper[4784]: I0123 07:11:47.435645 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="config-reloader" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.435853 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="thanos-sidecar" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.435931 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="7256997d-1626-42df-8c41-0487e94eefae" containerName="registry-server" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.436012 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" containerName="prometheus" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.436068 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="39095423-09e7-4099-8256-b1eab02f4707" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.438144 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.443044 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bvsrx" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.443279 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.443422 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.443549 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.444127 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.444275 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.448072 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.450474 4784 scope.go:117] "RemoveContainer" containerID="6a28253ac0032048200290f552c55613ace6ed8d277a52da3224b5099aebf6b7" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.455662 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.488938 4784 scope.go:117] "RemoveContainer" containerID="2444e6e56e66e69329ca6d890998e8774bd28f660539aa049c86704a170fe184" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.556029 4784 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566106 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566175 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566227 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg69f\" (UniqueName: \"kubernetes.io/projected/0ac872c1-b445-4e65-bb7a-47962509618c-kube-api-access-mg69f\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566266 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ac872c1-b445-4e65-bb7a-47962509618c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566298 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566339 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566370 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566392 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566415 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566445 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566502 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566567 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.566586 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ac872c1-b445-4e65-bb7a-47962509618c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668610 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668700 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668728 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ac872c1-b445-4e65-bb7a-47962509618c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668769 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668796 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668839 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mg69f\" (UniqueName: \"kubernetes.io/projected/0ac872c1-b445-4e65-bb7a-47962509618c-kube-api-access-mg69f\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668870 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ac872c1-b445-4e65-bb7a-47962509618c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668909 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668951 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.668980 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc 
kubenswrapper[4784]: I0123 07:11:47.669000 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.669024 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.669057 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.670701 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.673933 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.674361 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0ac872c1-b445-4e65-bb7a-47962509618c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.674992 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.675185 4784 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.675216 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/984fdf672f705a078d51f1b73c390067f647610423a2c84302a50834be3d8ee1/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.678419 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0ac872c1-b445-4e65-bb7a-47962509618c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.679212 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0ac872c1-b445-4e65-bb7a-47962509618c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.681739 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.684623 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.685123 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.685751 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.686068 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0ac872c1-b445-4e65-bb7a-47962509618c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.695940 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg69f\" (UniqueName: \"kubernetes.io/projected/0ac872c1-b445-4e65-bb7a-47962509618c-kube-api-access-mg69f\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 
07:11:47.729565 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6192221-140f-46e3-a3e7-14d2acad4265\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6192221-140f-46e3-a3e7-14d2acad4265\") pod \"prometheus-metric-storage-0\" (UID: \"0ac872c1-b445-4e65-bb7a-47962509618c\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:47 crc kubenswrapper[4784]: I0123 07:11:47.784648 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 07:11:48 crc kubenswrapper[4784]: I0123 07:11:48.277038 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:11:48 crc kubenswrapper[4784]: I0123 07:11:48.374899 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ac872c1-b445-4e65-bb7a-47962509618c","Type":"ContainerStarted","Data":"ab80796b2bbbfcc4b1793ebea65eef4900ad26ed9d5bb1d6962fdfb364e9b1de"} Jan 23 07:11:49 crc kubenswrapper[4784]: I0123 07:11:49.272022 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e974f78-4c17-480b-8a35-285a89f1cb35" path="/var/lib/kubelet/pods/3e974f78-4c17-480b-8a35-285a89f1cb35/volumes" Jan 23 07:11:53 crc kubenswrapper[4784]: I0123 07:11:53.439469 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ac872c1-b445-4e65-bb7a-47962509618c","Type":"ContainerStarted","Data":"c95e1d665ac0028919e63b46e587fa3c1b90a7c0ab659fe8b79d578567d8cddf"} Jan 23 07:11:55 crc kubenswrapper[4784]: I0123 07:11:55.259662 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:11:55 crc kubenswrapper[4784]: E0123 07:11:55.260652 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:12:01 crc kubenswrapper[4784]: I0123 07:12:01.546042 4784 generic.go:334] "Generic (PLEG): container finished" podID="0ac872c1-b445-4e65-bb7a-47962509618c" containerID="c95e1d665ac0028919e63b46e587fa3c1b90a7c0ab659fe8b79d578567d8cddf" exitCode=0 Jan 23 07:12:01 crc kubenswrapper[4784]: I0123 07:12:01.546193 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ac872c1-b445-4e65-bb7a-47962509618c","Type":"ContainerDied","Data":"c95e1d665ac0028919e63b46e587fa3c1b90a7c0ab659fe8b79d578567d8cddf"} Jan 23 07:12:02 crc kubenswrapper[4784]: I0123 07:12:02.562065 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ac872c1-b445-4e65-bb7a-47962509618c","Type":"ContainerStarted","Data":"f5bfdd4a2379bc4749ee31e5fea2ce0ca62be8aa25b5a8cce73d77cb85fd7172"} Jan 23 07:12:07 crc kubenswrapper[4784]: I0123 07:12:07.627556 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ac872c1-b445-4e65-bb7a-47962509618c","Type":"ContainerStarted","Data":"f7841aed3323c9e2dd12f769acd220af721a915eb65b16edb8983822753d9378"} Jan 23 07:12:07 crc kubenswrapper[4784]: I0123 07:12:07.628503 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0ac872c1-b445-4e65-bb7a-47962509618c","Type":"ContainerStarted","Data":"67804802c05e655fe12c8c63e38c56197c1037d5c057f210e4b0b8a1dc47f321"} Jan 23 07:12:07 crc kubenswrapper[4784]: I0123 07:12:07.687501 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.687477124 
podStartE2EDuration="20.687477124s" podCreationTimestamp="2026-01-23 07:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:12:07.671856439 +0000 UTC m=+3130.904364443" watchObservedRunningTime="2026-01-23 07:12:07.687477124 +0000 UTC m=+3130.919985098" Jan 23 07:12:07 crc kubenswrapper[4784]: I0123 07:12:07.785381 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 07:12:09 crc kubenswrapper[4784]: I0123 07:12:09.256611 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:12:09 crc kubenswrapper[4784]: E0123 07:12:09.258599 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:12:17 crc kubenswrapper[4784]: I0123 07:12:17.786018 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 07:12:17 crc kubenswrapper[4784]: I0123 07:12:17.796799 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 07:12:18 crc kubenswrapper[4784]: I0123 07:12:18.784704 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 07:12:22 crc kubenswrapper[4784]: I0123 07:12:22.254056 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:12:22 crc kubenswrapper[4784]: E0123 07:12:22.255308 4784 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:12:34 crc kubenswrapper[4784]: I0123 07:12:34.253970 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:12:34 crc kubenswrapper[4784]: E0123 07:12:34.255200 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.232927 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.236153 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.240115 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.240437 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rmvxh" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.242541 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.243252 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.319887 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.323406 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.323820 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-config-data\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.324099 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427335 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427427 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427459 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnqg8\" (UniqueName: \"kubernetes.io/projected/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-kube-api-access-rnqg8\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427488 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-config-data\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427536 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427576 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427643 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427684 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.427810 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.430170 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-config-data\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.430737 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.437070 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.530475 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.530647 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.530810 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.530894 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnqg8\" (UniqueName: \"kubernetes.io/projected/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-kube-api-access-rnqg8\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.530968 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.531022 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.531675 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.532533 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" 
(UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.532904 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.536737 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.540670 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.561480 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnqg8\" (UniqueName: \"kubernetes.io/projected/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-kube-api-access-rnqg8\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.578999 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " pod="openstack/tempest-tests-tempest" Jan 23 07:12:48 crc kubenswrapper[4784]: I0123 07:12:48.601317 4784 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 07:12:49 crc kubenswrapper[4784]: I0123 07:12:49.140659 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 07:12:49 crc kubenswrapper[4784]: I0123 07:12:49.156088 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:12:49 crc kubenswrapper[4784]: I0123 07:12:49.253463 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:12:49 crc kubenswrapper[4784]: E0123 07:12:49.253988 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:12:50 crc kubenswrapper[4784]: I0123 07:12:50.176490 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc","Type":"ContainerStarted","Data":"426b0f9d789d8ea2e21f84db9b42e631b7d2e1f242c74785a05507789d5a4968"} Jan 23 07:13:02 crc kubenswrapper[4784]: I0123 07:13:02.254665 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:13:02 crc kubenswrapper[4784]: E0123 07:13:02.255336 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:13:02 crc kubenswrapper[4784]: E0123 07:13:02.997637 4784 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Jan 23 07:13:02 crc kubenswrapper[4784]: E0123 07:13:02.997727 4784 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Jan 23 07:13:02 crc kubenswrapper[4784]: E0123 07:13:02.997941 4784 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.50:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.
yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnqg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 
07:13:02 crc kubenswrapper[4784]: E0123 07:13:02.999253 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" Jan 23 07:13:03 crc kubenswrapper[4784]: E0123 07:13:03.402689 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" Jan 23 07:13:15 crc kubenswrapper[4784]: I0123 07:13:15.255010 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:13:15 crc kubenswrapper[4784]: E0123 07:13:15.256534 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:13:16 crc kubenswrapper[4784]: I0123 07:13:16.348118 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 07:13:17 crc kubenswrapper[4784]: I0123 07:13:17.592518 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc","Type":"ContainerStarted","Data":"5c4cf8ba40a9fe304a4ed243b096e14daedc5ba932db30e5d9c5e1f290b9ec9c"} Jan 23 07:13:17 crc kubenswrapper[4784]: I0123 
07:13:17.627989 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.441387529 podStartE2EDuration="30.627920348s" podCreationTimestamp="2026-01-23 07:12:47 +0000 UTC" firstStartedPulling="2026-01-23 07:12:49.155854534 +0000 UTC m=+3172.388362508" lastFinishedPulling="2026-01-23 07:13:16.342387343 +0000 UTC m=+3199.574895327" observedRunningTime="2026-01-23 07:13:17.626509583 +0000 UTC m=+3200.859017597" watchObservedRunningTime="2026-01-23 07:13:17.627920348 +0000 UTC m=+3200.860428332" Jan 23 07:13:27 crc kubenswrapper[4784]: I0123 07:13:27.264484 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:13:27 crc kubenswrapper[4784]: E0123 07:13:27.265862 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:13:39 crc kubenswrapper[4784]: I0123 07:13:39.254254 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:13:39 crc kubenswrapper[4784]: E0123 07:13:39.255416 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:13:51 crc kubenswrapper[4784]: I0123 07:13:51.254709 4784 scope.go:117] 
"RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:13:51 crc kubenswrapper[4784]: E0123 07:13:51.256144 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:14:06 crc kubenswrapper[4784]: I0123 07:14:06.254353 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:14:06 crc kubenswrapper[4784]: E0123 07:14:06.255627 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:14:17 crc kubenswrapper[4784]: I0123 07:14:17.260432 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:14:17 crc kubenswrapper[4784]: E0123 07:14:17.261372 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:14:29 crc kubenswrapper[4784]: I0123 07:14:29.254666 
4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:14:29 crc kubenswrapper[4784]: E0123 07:14:29.256011 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:14:42 crc kubenswrapper[4784]: I0123 07:14:42.256621 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:14:42 crc kubenswrapper[4784]: E0123 07:14:42.258503 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:14:57 crc kubenswrapper[4784]: I0123 07:14:57.280248 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:14:57 crc kubenswrapper[4784]: E0123 07:14:57.281475 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 
07:15:00.190326 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd"] Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.193842 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.196534 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.197117 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.204918 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd"] Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.224335 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-secret-volume\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.225224 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh968\" (UniqueName: \"kubernetes.io/projected/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-kube-api-access-wh968\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.225304 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-config-volume\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.327025 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh968\" (UniqueName: \"kubernetes.io/projected/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-kube-api-access-wh968\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.327085 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-config-volume\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.327157 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-secret-volume\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.329357 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-config-volume\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc 
kubenswrapper[4784]: I0123 07:15:00.336851 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-secret-volume\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.344951 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh968\" (UniqueName: \"kubernetes.io/projected/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-kube-api-access-wh968\") pod \"collect-profiles-29485875-bkkqd\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:00 crc kubenswrapper[4784]: I0123 07:15:00.555689 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:01 crc kubenswrapper[4784]: I0123 07:15:01.147345 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd"] Jan 23 07:15:01 crc kubenswrapper[4784]: W0123 07:15:01.148389 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb93e9b23_2b0e_4107_9a5b_74c94e40fc62.slice/crio-60ea45da4adfee1d779e6d71187fd781f2ed55c3ecf26d09784808ac41f5a393 WatchSource:0}: Error finding container 60ea45da4adfee1d779e6d71187fd781f2ed55c3ecf26d09784808ac41f5a393: Status 404 returned error can't find the container with id 60ea45da4adfee1d779e6d71187fd781f2ed55c3ecf26d09784808ac41f5a393 Jan 23 07:15:02 crc kubenswrapper[4784]: I0123 07:15:02.109963 4784 generic.go:334] "Generic (PLEG): container finished" podID="b93e9b23-2b0e-4107-9a5b-74c94e40fc62" 
containerID="35d9370bf55163a1a28a3e46db6a38da12559a2e1ceca61745d8d494b57c9947" exitCode=0 Jan 23 07:15:02 crc kubenswrapper[4784]: I0123 07:15:02.110085 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" event={"ID":"b93e9b23-2b0e-4107-9a5b-74c94e40fc62","Type":"ContainerDied","Data":"35d9370bf55163a1a28a3e46db6a38da12559a2e1ceca61745d8d494b57c9947"} Jan 23 07:15:02 crc kubenswrapper[4784]: I0123 07:15:02.110591 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" event={"ID":"b93e9b23-2b0e-4107-9a5b-74c94e40fc62","Type":"ContainerStarted","Data":"60ea45da4adfee1d779e6d71187fd781f2ed55c3ecf26d09784808ac41f5a393"} Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.569006 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.731307 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-secret-volume\") pod \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.731799 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh968\" (UniqueName: \"kubernetes.io/projected/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-kube-api-access-wh968\") pod \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.732083 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-config-volume\") pod 
\"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\" (UID: \"b93e9b23-2b0e-4107-9a5b-74c94e40fc62\") " Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.733007 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-config-volume" (OuterVolumeSpecName: "config-volume") pod "b93e9b23-2b0e-4107-9a5b-74c94e40fc62" (UID: "b93e9b23-2b0e-4107-9a5b-74c94e40fc62"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.745122 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b93e9b23-2b0e-4107-9a5b-74c94e40fc62" (UID: "b93e9b23-2b0e-4107-9a5b-74c94e40fc62"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.746062 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-kube-api-access-wh968" (OuterVolumeSpecName: "kube-api-access-wh968") pod "b93e9b23-2b0e-4107-9a5b-74c94e40fc62" (UID: "b93e9b23-2b0e-4107-9a5b-74c94e40fc62"). InnerVolumeSpecName "kube-api-access-wh968". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.835305 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.835360 4784 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:15:03 crc kubenswrapper[4784]: I0123 07:15:03.835377 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh968\" (UniqueName: \"kubernetes.io/projected/b93e9b23-2b0e-4107-9a5b-74c94e40fc62-kube-api-access-wh968\") on node \"crc\" DevicePath \"\"" Jan 23 07:15:04 crc kubenswrapper[4784]: I0123 07:15:04.135390 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" event={"ID":"b93e9b23-2b0e-4107-9a5b-74c94e40fc62","Type":"ContainerDied","Data":"60ea45da4adfee1d779e6d71187fd781f2ed55c3ecf26d09784808ac41f5a393"} Jan 23 07:15:04 crc kubenswrapper[4784]: I0123 07:15:04.135445 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ea45da4adfee1d779e6d71187fd781f2ed55c3ecf26d09784808ac41f5a393" Jan 23 07:15:04 crc kubenswrapper[4784]: I0123 07:15:04.135500 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd" Jan 23 07:15:04 crc kubenswrapper[4784]: I0123 07:15:04.669090 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf"] Jan 23 07:15:04 crc kubenswrapper[4784]: I0123 07:15:04.682439 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-v84rf"] Jan 23 07:15:05 crc kubenswrapper[4784]: I0123 07:15:05.276887 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3d5c55-d207-432d-8236-64168a40935b" path="/var/lib/kubelet/pods/8f3d5c55-d207-432d-8236-64168a40935b/volumes" Jan 23 07:15:08 crc kubenswrapper[4784]: I0123 07:15:08.254953 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:15:08 crc kubenswrapper[4784]: E0123 07:15:08.256086 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:15:19 crc kubenswrapper[4784]: I0123 07:15:19.254481 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:15:19 crc kubenswrapper[4784]: E0123 07:15:19.255435 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:15:31 crc kubenswrapper[4784]: I0123 07:15:31.254516 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:15:31 crc kubenswrapper[4784]: E0123 07:15:31.255345 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:15:38 crc kubenswrapper[4784]: I0123 07:15:38.120928 4784 scope.go:117] "RemoveContainer" containerID="a397f5aa10b572aa7a5ff0fa37600ce4dfe68922557e1355aff08995b3569af7" Jan 23 07:15:44 crc kubenswrapper[4784]: I0123 07:15:44.255059 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:15:44 crc kubenswrapper[4784]: E0123 07:15:44.256495 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.178679 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w4sqp"] Jan 23 07:15:58 crc kubenswrapper[4784]: E0123 07:15:58.180336 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93e9b23-2b0e-4107-9a5b-74c94e40fc62" 
containerName="collect-profiles" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.180358 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93e9b23-2b0e-4107-9a5b-74c94e40fc62" containerName="collect-profiles" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.180638 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93e9b23-2b0e-4107-9a5b-74c94e40fc62" containerName="collect-profiles" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.183029 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.231557 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqxw\" (UniqueName: \"kubernetes.io/projected/d7735745-6386-4504-826b-21e3532d85d8-kube-api-access-xxqxw\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.231616 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-utilities\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.231855 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-catalog-content\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.240347 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-w4sqp"] Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.255026 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.335034 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-catalog-content\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.335876 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxqxw\" (UniqueName: \"kubernetes.io/projected/d7735745-6386-4504-826b-21e3532d85d8-kube-api-access-xxqxw\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.335908 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-utilities\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.337286 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-utilities\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.337645 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-catalog-content\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.357105 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqxw\" (UniqueName: \"kubernetes.io/projected/d7735745-6386-4504-826b-21e3532d85d8-kube-api-access-xxqxw\") pod \"redhat-marketplace-w4sqp\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.535840 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:15:58 crc kubenswrapper[4784]: I0123 07:15:58.849244 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"d338b49f23099312d309fec3a4ee34e5faf06f7670aa1a5dfe8475370de40deb"} Jan 23 07:15:59 crc kubenswrapper[4784]: I0123 07:15:59.146784 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w4sqp"] Jan 23 07:15:59 crc kubenswrapper[4784]: W0123 07:15:59.157617 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7735745_6386_4504_826b_21e3532d85d8.slice/crio-95d538d0d158d7915dfaee44d1ea15d6b5e246d539ec49653ca1f9839c4e6fb6 WatchSource:0}: Error finding container 95d538d0d158d7915dfaee44d1ea15d6b5e246d539ec49653ca1f9839c4e6fb6: Status 404 returned error can't find the container with id 95d538d0d158d7915dfaee44d1ea15d6b5e246d539ec49653ca1f9839c4e6fb6 Jan 23 07:15:59 crc kubenswrapper[4784]: I0123 07:15:59.866787 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="d7735745-6386-4504-826b-21e3532d85d8" containerID="433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f" exitCode=0 Jan 23 07:15:59 crc kubenswrapper[4784]: I0123 07:15:59.867042 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4sqp" event={"ID":"d7735745-6386-4504-826b-21e3532d85d8","Type":"ContainerDied","Data":"433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f"} Jan 23 07:15:59 crc kubenswrapper[4784]: I0123 07:15:59.867471 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4sqp" event={"ID":"d7735745-6386-4504-826b-21e3532d85d8","Type":"ContainerStarted","Data":"95d538d0d158d7915dfaee44d1ea15d6b5e246d539ec49653ca1f9839c4e6fb6"} Jan 23 07:16:01 crc kubenswrapper[4784]: I0123 07:16:01.912531 4784 generic.go:334] "Generic (PLEG): container finished" podID="d7735745-6386-4504-826b-21e3532d85d8" containerID="f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84" exitCode=0 Jan 23 07:16:01 crc kubenswrapper[4784]: I0123 07:16:01.913476 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4sqp" event={"ID":"d7735745-6386-4504-826b-21e3532d85d8","Type":"ContainerDied","Data":"f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84"} Jan 23 07:16:02 crc kubenswrapper[4784]: I0123 07:16:02.928008 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4sqp" event={"ID":"d7735745-6386-4504-826b-21e3532d85d8","Type":"ContainerStarted","Data":"c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a"} Jan 23 07:16:02 crc kubenswrapper[4784]: I0123 07:16:02.960690 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w4sqp" podStartSLOduration=2.440095967 podStartE2EDuration="4.960631108s" podCreationTimestamp="2026-01-23 07:15:58 +0000 UTC" 
firstStartedPulling="2026-01-23 07:15:59.8706176 +0000 UTC m=+3363.103125584" lastFinishedPulling="2026-01-23 07:16:02.391152751 +0000 UTC m=+3365.623660725" observedRunningTime="2026-01-23 07:16:02.958464724 +0000 UTC m=+3366.190972698" watchObservedRunningTime="2026-01-23 07:16:02.960631108 +0000 UTC m=+3366.193139102" Jan 23 07:16:08 crc kubenswrapper[4784]: I0123 07:16:08.536503 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:16:08 crc kubenswrapper[4784]: I0123 07:16:08.537318 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:16:08 crc kubenswrapper[4784]: I0123 07:16:08.597540 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:16:09 crc kubenswrapper[4784]: I0123 07:16:09.054786 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:16:09 crc kubenswrapper[4784]: I0123 07:16:09.122625 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w4sqp"] Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.019368 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w4sqp" podUID="d7735745-6386-4504-826b-21e3532d85d8" containerName="registry-server" containerID="cri-o://c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a" gracePeriod=2 Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.592462 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.707570 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-utilities\") pod \"d7735745-6386-4504-826b-21e3532d85d8\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.708045 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-catalog-content\") pod \"d7735745-6386-4504-826b-21e3532d85d8\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.708337 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxqxw\" (UniqueName: \"kubernetes.io/projected/d7735745-6386-4504-826b-21e3532d85d8-kube-api-access-xxqxw\") pod \"d7735745-6386-4504-826b-21e3532d85d8\" (UID: \"d7735745-6386-4504-826b-21e3532d85d8\") " Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.708694 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-utilities" (OuterVolumeSpecName: "utilities") pod "d7735745-6386-4504-826b-21e3532d85d8" (UID: "d7735745-6386-4504-826b-21e3532d85d8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.710039 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.716915 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7735745-6386-4504-826b-21e3532d85d8-kube-api-access-xxqxw" (OuterVolumeSpecName: "kube-api-access-xxqxw") pod "d7735745-6386-4504-826b-21e3532d85d8" (UID: "d7735745-6386-4504-826b-21e3532d85d8"). InnerVolumeSpecName "kube-api-access-xxqxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.735054 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7735745-6386-4504-826b-21e3532d85d8" (UID: "d7735745-6386-4504-826b-21e3532d85d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.812157 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxqxw\" (UniqueName: \"kubernetes.io/projected/d7735745-6386-4504-826b-21e3532d85d8-kube-api-access-xxqxw\") on node \"crc\" DevicePath \"\"" Jan 23 07:16:11 crc kubenswrapper[4784]: I0123 07:16:11.812195 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7735745-6386-4504-826b-21e3532d85d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.032267 4784 generic.go:334] "Generic (PLEG): container finished" podID="d7735745-6386-4504-826b-21e3532d85d8" containerID="c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a" exitCode=0 Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.032338 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4sqp" event={"ID":"d7735745-6386-4504-826b-21e3532d85d8","Type":"ContainerDied","Data":"c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a"} Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.032438 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w4sqp" event={"ID":"d7735745-6386-4504-826b-21e3532d85d8","Type":"ContainerDied","Data":"95d538d0d158d7915dfaee44d1ea15d6b5e246d539ec49653ca1f9839c4e6fb6"} Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.032477 4784 scope.go:117] "RemoveContainer" containerID="c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.034244 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w4sqp" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.059599 4784 scope.go:117] "RemoveContainer" containerID="f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.079671 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w4sqp"] Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.090442 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w4sqp"] Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.095913 4784 scope.go:117] "RemoveContainer" containerID="433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.147203 4784 scope.go:117] "RemoveContainer" containerID="c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a" Jan 23 07:16:12 crc kubenswrapper[4784]: E0123 07:16:12.148167 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a\": container with ID starting with c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a not found: ID does not exist" containerID="c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.148241 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a"} err="failed to get container status \"c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a\": rpc error: code = NotFound desc = could not find container \"c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a\": container with ID starting with c6f469ff3cead23177d95d36549f7a3863322108ead258c481e40e131d38ca9a not found: 
ID does not exist" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.148284 4784 scope.go:117] "RemoveContainer" containerID="f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84" Jan 23 07:16:12 crc kubenswrapper[4784]: E0123 07:16:12.148886 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84\": container with ID starting with f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84 not found: ID does not exist" containerID="f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.148974 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84"} err="failed to get container status \"f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84\": rpc error: code = NotFound desc = could not find container \"f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84\": container with ID starting with f9da641027f6198cab18684faeb20326d3f818b0b797cb4aaac1afe388cb1e84 not found: ID does not exist" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.149071 4784 scope.go:117] "RemoveContainer" containerID="433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f" Jan 23 07:16:12 crc kubenswrapper[4784]: E0123 07:16:12.149726 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f\": container with ID starting with 433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f not found: ID does not exist" containerID="433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f" Jan 23 07:16:12 crc kubenswrapper[4784]: I0123 07:16:12.149834 4784 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f"} err="failed to get container status \"433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f\": rpc error: code = NotFound desc = could not find container \"433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f\": container with ID starting with 433401558ff79e7544243522bd1173cc7074ea8bba92ed8abf592138626c667f not found: ID does not exist" Jan 23 07:16:13 crc kubenswrapper[4784]: I0123 07:16:13.270300 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7735745-6386-4504-826b-21e3532d85d8" path="/var/lib/kubelet/pods/d7735745-6386-4504-826b-21e3532d85d8/volumes" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.091978 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4xnss"] Jan 23 07:17:05 crc kubenswrapper[4784]: E0123 07:17:05.093796 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7735745-6386-4504-826b-21e3532d85d8" containerName="registry-server" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.093836 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7735745-6386-4504-826b-21e3532d85d8" containerName="registry-server" Jan 23 07:17:05 crc kubenswrapper[4784]: E0123 07:17:05.093863 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7735745-6386-4504-826b-21e3532d85d8" containerName="extract-utilities" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.093875 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7735745-6386-4504-826b-21e3532d85d8" containerName="extract-utilities" Jan 23 07:17:05 crc kubenswrapper[4784]: E0123 07:17:05.093898 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7735745-6386-4504-826b-21e3532d85d8" containerName="extract-content" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.093929 4784 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d7735745-6386-4504-826b-21e3532d85d8" containerName="extract-content" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.094341 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7735745-6386-4504-826b-21e3532d85d8" containerName="registry-server" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.098654 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.115291 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xnss"] Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.128782 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gt22\" (UniqueName: \"kubernetes.io/projected/56ff0623-6545-40a8-9f1c-ca7e982c7780-kube-api-access-6gt22\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.128985 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-catalog-content\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.129120 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-utilities\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 
07:17:05.232488 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-utilities\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.232607 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gt22\" (UniqueName: \"kubernetes.io/projected/56ff0623-6545-40a8-9f1c-ca7e982c7780-kube-api-access-6gt22\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.232871 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-catalog-content\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.233387 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-utilities\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.233554 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-catalog-content\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.259067 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gt22\" (UniqueName: \"kubernetes.io/projected/56ff0623-6545-40a8-9f1c-ca7e982c7780-kube-api-access-6gt22\") pod \"certified-operators-4xnss\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:05 crc kubenswrapper[4784]: I0123 07:17:05.452091 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:06 crc kubenswrapper[4784]: I0123 07:17:06.048426 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xnss"] Jan 23 07:17:06 crc kubenswrapper[4784]: I0123 07:17:06.836771 4784 generic.go:334] "Generic (PLEG): container finished" podID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerID="0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c" exitCode=0 Jan 23 07:17:06 crc kubenswrapper[4784]: I0123 07:17:06.836884 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xnss" event={"ID":"56ff0623-6545-40a8-9f1c-ca7e982c7780","Type":"ContainerDied","Data":"0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c"} Jan 23 07:17:06 crc kubenswrapper[4784]: I0123 07:17:06.837244 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xnss" event={"ID":"56ff0623-6545-40a8-9f1c-ca7e982c7780","Type":"ContainerStarted","Data":"61a5f09b4022dd1c573381b6a2b3961ec1d9832be5042db80268802d32d455e5"} Jan 23 07:17:07 crc kubenswrapper[4784]: I0123 07:17:07.850654 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xnss" event={"ID":"56ff0623-6545-40a8-9f1c-ca7e982c7780","Type":"ContainerStarted","Data":"bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67"} Jan 23 07:17:09 crc kubenswrapper[4784]: I0123 07:17:09.877335 4784 
generic.go:334] "Generic (PLEG): container finished" podID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerID="bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67" exitCode=0 Jan 23 07:17:09 crc kubenswrapper[4784]: I0123 07:17:09.877398 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xnss" event={"ID":"56ff0623-6545-40a8-9f1c-ca7e982c7780","Type":"ContainerDied","Data":"bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67"} Jan 23 07:17:10 crc kubenswrapper[4784]: I0123 07:17:10.902524 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xnss" event={"ID":"56ff0623-6545-40a8-9f1c-ca7e982c7780","Type":"ContainerStarted","Data":"ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b"} Jan 23 07:17:15 crc kubenswrapper[4784]: I0123 07:17:15.452935 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:15 crc kubenswrapper[4784]: I0123 07:17:15.453768 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:15 crc kubenswrapper[4784]: I0123 07:17:15.508436 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:15 crc kubenswrapper[4784]: I0123 07:17:15.537612 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4xnss" podStartSLOduration=7.05736062 podStartE2EDuration="10.537589767s" podCreationTimestamp="2026-01-23 07:17:05 +0000 UTC" firstStartedPulling="2026-01-23 07:17:06.839708609 +0000 UTC m=+3430.072216593" lastFinishedPulling="2026-01-23 07:17:10.319937766 +0000 UTC m=+3433.552445740" observedRunningTime="2026-01-23 07:17:10.964776744 +0000 UTC m=+3434.197284718" watchObservedRunningTime="2026-01-23 
07:17:15.537589767 +0000 UTC m=+3438.770097741" Jan 23 07:17:16 crc kubenswrapper[4784]: I0123 07:17:16.033256 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:16 crc kubenswrapper[4784]: I0123 07:17:16.163037 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xnss"] Jan 23 07:17:17 crc kubenswrapper[4784]: I0123 07:17:17.994470 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4xnss" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerName="registry-server" containerID="cri-o://ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b" gracePeriod=2 Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.599017 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.711796 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-utilities\") pod \"56ff0623-6545-40a8-9f1c-ca7e982c7780\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.711959 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-catalog-content\") pod \"56ff0623-6545-40a8-9f1c-ca7e982c7780\" (UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.711992 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gt22\" (UniqueName: \"kubernetes.io/projected/56ff0623-6545-40a8-9f1c-ca7e982c7780-kube-api-access-6gt22\") pod \"56ff0623-6545-40a8-9f1c-ca7e982c7780\" 
(UID: \"56ff0623-6545-40a8-9f1c-ca7e982c7780\") " Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.713943 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-utilities" (OuterVolumeSpecName: "utilities") pod "56ff0623-6545-40a8-9f1c-ca7e982c7780" (UID: "56ff0623-6545-40a8-9f1c-ca7e982c7780"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.720342 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ff0623-6545-40a8-9f1c-ca7e982c7780-kube-api-access-6gt22" (OuterVolumeSpecName: "kube-api-access-6gt22") pod "56ff0623-6545-40a8-9f1c-ca7e982c7780" (UID: "56ff0623-6545-40a8-9f1c-ca7e982c7780"). InnerVolumeSpecName "kube-api-access-6gt22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.776899 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56ff0623-6545-40a8-9f1c-ca7e982c7780" (UID: "56ff0623-6545-40a8-9f1c-ca7e982c7780"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.815192 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.815217 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ff0623-6545-40a8-9f1c-ca7e982c7780-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:17:18 crc kubenswrapper[4784]: I0123 07:17:18.815233 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gt22\" (UniqueName: \"kubernetes.io/projected/56ff0623-6545-40a8-9f1c-ca7e982c7780-kube-api-access-6gt22\") on node \"crc\" DevicePath \"\"" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.011630 4784 generic.go:334] "Generic (PLEG): container finished" podID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerID="ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b" exitCode=0 Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.011953 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xnss" event={"ID":"56ff0623-6545-40a8-9f1c-ca7e982c7780","Type":"ContainerDied","Data":"ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b"} Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.011997 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4xnss" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.013152 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xnss" event={"ID":"56ff0623-6545-40a8-9f1c-ca7e982c7780","Type":"ContainerDied","Data":"61a5f09b4022dd1c573381b6a2b3961ec1d9832be5042db80268802d32d455e5"} Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.013219 4784 scope.go:117] "RemoveContainer" containerID="ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.054633 4784 scope.go:117] "RemoveContainer" containerID="bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.054876 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xnss"] Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.066408 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4xnss"] Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.080392 4784 scope.go:117] "RemoveContainer" containerID="0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.146815 4784 scope.go:117] "RemoveContainer" containerID="ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b" Jan 23 07:17:19 crc kubenswrapper[4784]: E0123 07:17:19.147390 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b\": container with ID starting with ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b not found: ID does not exist" containerID="ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.147432 4784 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b"} err="failed to get container status \"ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b\": rpc error: code = NotFound desc = could not find container \"ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b\": container with ID starting with ecfe1ac14885338ab482a2f090a5955a8d4e9e9d787d81cd4f232e0fd6764e2b not found: ID does not exist" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.147465 4784 scope.go:117] "RemoveContainer" containerID="bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67" Jan 23 07:17:19 crc kubenswrapper[4784]: E0123 07:17:19.148055 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67\": container with ID starting with bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67 not found: ID does not exist" containerID="bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.148111 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67"} err="failed to get container status \"bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67\": rpc error: code = NotFound desc = could not find container \"bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67\": container with ID starting with bc1f7bc89c6d2c820854d355d0d8d81b2ebc0f8b022f5dbf00de0a6630342a67 not found: ID does not exist" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.148150 4784 scope.go:117] "RemoveContainer" containerID="0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c" Jan 23 07:17:19 crc kubenswrapper[4784]: E0123 
07:17:19.150623 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c\": container with ID starting with 0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c not found: ID does not exist" containerID="0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.150664 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c"} err="failed to get container status \"0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c\": rpc error: code = NotFound desc = could not find container \"0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c\": container with ID starting with 0979bcde9f1caa0f793e8ea9bc6da01636d1521308b8877cb7761edcd24e909c not found: ID does not exist" Jan 23 07:17:19 crc kubenswrapper[4784]: I0123 07:17:19.265745 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" path="/var/lib/kubelet/pods/56ff0623-6545-40a8-9f1c-ca7e982c7780/volumes" Jan 23 07:17:58 crc kubenswrapper[4784]: I0123 07:17:58.800152 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="87ac961b-d41b-43ef-b55e-07b0cf093e56" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.185:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:18:23 crc kubenswrapper[4784]: I0123 07:18:23.603455 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 
07:18:23 crc kubenswrapper[4784]: I0123 07:18:23.604439 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:18:53 crc kubenswrapper[4784]: I0123 07:18:53.603011 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:18:53 crc kubenswrapper[4784]: I0123 07:18:53.603889 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.643925 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-89rv6"] Jan 23 07:18:58 crc kubenswrapper[4784]: E0123 07:18:58.645842 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerName="extract-utilities" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.645872 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerName="extract-utilities" Jan 23 07:18:58 crc kubenswrapper[4784]: E0123 07:18:58.645928 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerName="extract-content" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.645944 4784 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerName="extract-content" Jan 23 07:18:58 crc kubenswrapper[4784]: E0123 07:18:58.645970 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerName="registry-server" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.645983 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerName="registry-server" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.646377 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ff0623-6545-40a8-9f1c-ca7e982c7780" containerName="registry-server" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.652898 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.667234 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-89rv6"] Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.669050 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-catalog-content\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.669249 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-utilities\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.669298 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29dvj\" (UniqueName: \"kubernetes.io/projected/055e7ec1-91e2-4c95-be90-704700c0480f-kube-api-access-29dvj\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.772188 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-utilities\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.772515 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29dvj\" (UniqueName: \"kubernetes.io/projected/055e7ec1-91e2-4c95-be90-704700c0480f-kube-api-access-29dvj\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.772680 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-catalog-content\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.773022 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-utilities\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.773297 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-catalog-content\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.802917 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29dvj\" (UniqueName: \"kubernetes.io/projected/055e7ec1-91e2-4c95-be90-704700c0480f-kube-api-access-29dvj\") pod \"community-operators-89rv6\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:58 crc kubenswrapper[4784]: I0123 07:18:58.985850 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:18:59 crc kubenswrapper[4784]: I0123 07:18:59.614350 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-89rv6"] Jan 23 07:19:00 crc kubenswrapper[4784]: I0123 07:19:00.337865 4784 generic.go:334] "Generic (PLEG): container finished" podID="055e7ec1-91e2-4c95-be90-704700c0480f" containerID="b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8" exitCode=0 Jan 23 07:19:00 crc kubenswrapper[4784]: I0123 07:19:00.337957 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89rv6" event={"ID":"055e7ec1-91e2-4c95-be90-704700c0480f","Type":"ContainerDied","Data":"b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8"} Jan 23 07:19:00 crc kubenswrapper[4784]: I0123 07:19:00.338500 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89rv6" event={"ID":"055e7ec1-91e2-4c95-be90-704700c0480f","Type":"ContainerStarted","Data":"31f6b2096408dd0b87a50ca9ec45c38d89dd3b272d5f847d6ca5ae53c71981f8"} Jan 23 07:19:00 crc 
kubenswrapper[4784]: I0123 07:19:00.341050 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:19:02 crc kubenswrapper[4784]: I0123 07:19:02.374878 4784 generic.go:334] "Generic (PLEG): container finished" podID="055e7ec1-91e2-4c95-be90-704700c0480f" containerID="bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401" exitCode=0 Jan 23 07:19:02 crc kubenswrapper[4784]: I0123 07:19:02.374985 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89rv6" event={"ID":"055e7ec1-91e2-4c95-be90-704700c0480f","Type":"ContainerDied","Data":"bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401"} Jan 23 07:19:04 crc kubenswrapper[4784]: I0123 07:19:04.401497 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89rv6" event={"ID":"055e7ec1-91e2-4c95-be90-704700c0480f","Type":"ContainerStarted","Data":"cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970"} Jan 23 07:19:04 crc kubenswrapper[4784]: I0123 07:19:04.429783 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-89rv6" podStartSLOduration=2.922447166 podStartE2EDuration="6.429733436s" podCreationTimestamp="2026-01-23 07:18:58 +0000 UTC" firstStartedPulling="2026-01-23 07:19:00.34054663 +0000 UTC m=+3543.573054614" lastFinishedPulling="2026-01-23 07:19:03.84783291 +0000 UTC m=+3547.080340884" observedRunningTime="2026-01-23 07:19:04.425236845 +0000 UTC m=+3547.657744879" watchObservedRunningTime="2026-01-23 07:19:04.429733436 +0000 UTC m=+3547.662241420" Jan 23 07:19:08 crc kubenswrapper[4784]: I0123 07:19:08.985960 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:19:08 crc kubenswrapper[4784]: I0123 07:19:08.986981 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:19:09 crc kubenswrapper[4784]: I0123 07:19:09.077054 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:19:09 crc kubenswrapper[4784]: I0123 07:19:09.541352 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:19:09 crc kubenswrapper[4784]: I0123 07:19:09.596887 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-89rv6"] Jan 23 07:19:11 crc kubenswrapper[4784]: I0123 07:19:11.482543 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-89rv6" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" containerName="registry-server" containerID="cri-o://cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970" gracePeriod=2 Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.116158 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.242220 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29dvj\" (UniqueName: \"kubernetes.io/projected/055e7ec1-91e2-4c95-be90-704700c0480f-kube-api-access-29dvj\") pod \"055e7ec1-91e2-4c95-be90-704700c0480f\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.242563 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-utilities\") pod \"055e7ec1-91e2-4c95-be90-704700c0480f\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.242616 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-catalog-content\") pod \"055e7ec1-91e2-4c95-be90-704700c0480f\" (UID: \"055e7ec1-91e2-4c95-be90-704700c0480f\") " Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.243785 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-utilities" (OuterVolumeSpecName: "utilities") pod "055e7ec1-91e2-4c95-be90-704700c0480f" (UID: "055e7ec1-91e2-4c95-be90-704700c0480f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.257898 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/055e7ec1-91e2-4c95-be90-704700c0480f-kube-api-access-29dvj" (OuterVolumeSpecName: "kube-api-access-29dvj") pod "055e7ec1-91e2-4c95-be90-704700c0480f" (UID: "055e7ec1-91e2-4c95-be90-704700c0480f"). InnerVolumeSpecName "kube-api-access-29dvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.311334 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "055e7ec1-91e2-4c95-be90-704700c0480f" (UID: "055e7ec1-91e2-4c95-be90-704700c0480f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.345013 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.345070 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29dvj\" (UniqueName: \"kubernetes.io/projected/055e7ec1-91e2-4c95-be90-704700c0480f-kube-api-access-29dvj\") on node \"crc\" DevicePath \"\"" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.345109 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/055e7ec1-91e2-4c95-be90-704700c0480f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.501645 4784 generic.go:334] "Generic (PLEG): container finished" podID="055e7ec1-91e2-4c95-be90-704700c0480f" containerID="cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970" exitCode=0 Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.502211 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89rv6" event={"ID":"055e7ec1-91e2-4c95-be90-704700c0480f","Type":"ContainerDied","Data":"cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970"} Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.502255 4784 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-89rv6" event={"ID":"055e7ec1-91e2-4c95-be90-704700c0480f","Type":"ContainerDied","Data":"31f6b2096408dd0b87a50ca9ec45c38d89dd3b272d5f847d6ca5ae53c71981f8"} Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.502279 4784 scope.go:117] "RemoveContainer" containerID="cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.502353 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-89rv6" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.546820 4784 scope.go:117] "RemoveContainer" containerID="bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.569041 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-89rv6"] Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.587363 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-89rv6"] Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.599113 4784 scope.go:117] "RemoveContainer" containerID="b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.632686 4784 scope.go:117] "RemoveContainer" containerID="cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970" Jan 23 07:19:12 crc kubenswrapper[4784]: E0123 07:19:12.633968 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970\": container with ID starting with cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970 not found: ID does not exist" containerID="cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 
07:19:12.634081 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970"} err="failed to get container status \"cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970\": rpc error: code = NotFound desc = could not find container \"cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970\": container with ID starting with cfae62a0eb58a668c84d2f99371dc3e9bf25d16914b72b304b4cfb258c2e7970 not found: ID does not exist" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.634175 4784 scope.go:117] "RemoveContainer" containerID="bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401" Jan 23 07:19:12 crc kubenswrapper[4784]: E0123 07:19:12.634742 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401\": container with ID starting with bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401 not found: ID does not exist" containerID="bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.634856 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401"} err="failed to get container status \"bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401\": rpc error: code = NotFound desc = could not find container \"bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401\": container with ID starting with bd0a4eff3a06e00750c4431a4da2ca6319ba02a2439a16fb9fe0a1ceb1730401 not found: ID does not exist" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.634922 4784 scope.go:117] "RemoveContainer" containerID="b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8" Jan 23 07:19:12 crc 
kubenswrapper[4784]: E0123 07:19:12.635456 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8\": container with ID starting with b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8 not found: ID does not exist" containerID="b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8" Jan 23 07:19:12 crc kubenswrapper[4784]: I0123 07:19:12.635531 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8"} err="failed to get container status \"b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8\": rpc error: code = NotFound desc = could not find container \"b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8\": container with ID starting with b6c7365c492c2c9119e74db8aa16db2360e2a3aa0054f4b81c8330e107a0a5e8 not found: ID does not exist" Jan 23 07:19:13 crc kubenswrapper[4784]: I0123 07:19:13.265296 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" path="/var/lib/kubelet/pods/055e7ec1-91e2-4c95-be90-704700c0480f/volumes" Jan 23 07:19:23 crc kubenswrapper[4784]: I0123 07:19:23.604083 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:19:23 crc kubenswrapper[4784]: I0123 07:19:23.605191 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 23 07:19:23 crc kubenswrapper[4784]: I0123 07:19:23.605275 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 07:19:23 crc kubenswrapper[4784]: I0123 07:19:23.606439 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d338b49f23099312d309fec3a4ee34e5faf06f7670aa1a5dfe8475370de40deb"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:19:23 crc kubenswrapper[4784]: I0123 07:19:23.606552 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://d338b49f23099312d309fec3a4ee34e5faf06f7670aa1a5dfe8475370de40deb" gracePeriod=600 Jan 23 07:19:24 crc kubenswrapper[4784]: I0123 07:19:24.651948 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="d338b49f23099312d309fec3a4ee34e5faf06f7670aa1a5dfe8475370de40deb" exitCode=0 Jan 23 07:19:24 crc kubenswrapper[4784]: I0123 07:19:24.652212 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"d338b49f23099312d309fec3a4ee34e5faf06f7670aa1a5dfe8475370de40deb"} Jan 23 07:19:24 crc kubenswrapper[4784]: I0123 07:19:24.653643 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" 
event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"} Jan 23 07:19:24 crc kubenswrapper[4784]: I0123 07:19:24.653725 4784 scope.go:117] "RemoveContainer" containerID="83b3053426db572164e4af193a1a523fcec43c4659989c2af99bc9354b337c83" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.335876 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mpx8k"] Jan 23 07:20:17 crc kubenswrapper[4784]: E0123 07:20:17.337057 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" containerName="extract-content" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.337075 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" containerName="extract-content" Jan 23 07:20:17 crc kubenswrapper[4784]: E0123 07:20:17.337101 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" containerName="extract-utilities" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.337109 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" containerName="extract-utilities" Jan 23 07:20:17 crc kubenswrapper[4784]: E0123 07:20:17.337135 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" containerName="registry-server" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.337142 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" containerName="registry-server" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.337412 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="055e7ec1-91e2-4c95-be90-704700c0480f" containerName="registry-server" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.339373 4784 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.343376 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpx8k"] Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.407872 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-catalog-content\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.408030 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr5t6\" (UniqueName: \"kubernetes.io/projected/df6f4029-51e5-4394-88ac-979a6f58f44f-kube-api-access-mr5t6\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.408102 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-utilities\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.510086 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr5t6\" (UniqueName: \"kubernetes.io/projected/df6f4029-51e5-4394-88ac-979a6f58f44f-kube-api-access-mr5t6\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.510197 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-utilities\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.510268 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-catalog-content\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.510963 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-catalog-content\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.512080 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-utilities\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.550488 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr5t6\" (UniqueName: \"kubernetes.io/projected/df6f4029-51e5-4394-88ac-979a6f58f44f-kube-api-access-mr5t6\") pod \"redhat-operators-mpx8k\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:17 crc kubenswrapper[4784]: I0123 07:20:17.687243 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:18 crc kubenswrapper[4784]: I0123 07:20:18.247447 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpx8k"] Jan 23 07:20:19 crc kubenswrapper[4784]: I0123 07:20:19.268400 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpx8k" event={"ID":"df6f4029-51e5-4394-88ac-979a6f58f44f","Type":"ContainerStarted","Data":"b1be07727d2531490fcd31f8a9be1cb47552dcfced2b924b7fdf16bf7393ace4"} Jan 23 07:20:20 crc kubenswrapper[4784]: I0123 07:20:20.282612 4784 generic.go:334] "Generic (PLEG): container finished" podID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerID="3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584" exitCode=0 Jan 23 07:20:20 crc kubenswrapper[4784]: I0123 07:20:20.282738 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpx8k" event={"ID":"df6f4029-51e5-4394-88ac-979a6f58f44f","Type":"ContainerDied","Data":"3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584"} Jan 23 07:20:22 crc kubenswrapper[4784]: I0123 07:20:22.318722 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpx8k" event={"ID":"df6f4029-51e5-4394-88ac-979a6f58f44f","Type":"ContainerStarted","Data":"6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84"} Jan 23 07:20:23 crc kubenswrapper[4784]: I0123 07:20:23.337362 4784 generic.go:334] "Generic (PLEG): container finished" podID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerID="6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84" exitCode=0 Jan 23 07:20:23 crc kubenswrapper[4784]: I0123 07:20:23.337433 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpx8k" 
event={"ID":"df6f4029-51e5-4394-88ac-979a6f58f44f","Type":"ContainerDied","Data":"6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84"} Jan 23 07:20:31 crc kubenswrapper[4784]: I0123 07:20:31.442184 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpx8k" event={"ID":"df6f4029-51e5-4394-88ac-979a6f58f44f","Type":"ContainerStarted","Data":"e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae"} Jan 23 07:20:31 crc kubenswrapper[4784]: I0123 07:20:31.474315 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mpx8k" podStartSLOduration=4.430818555 podStartE2EDuration="14.474292996s" podCreationTimestamp="2026-01-23 07:20:17 +0000 UTC" firstStartedPulling="2026-01-23 07:20:20.284837742 +0000 UTC m=+3623.517345726" lastFinishedPulling="2026-01-23 07:20:30.328312153 +0000 UTC m=+3633.560820167" observedRunningTime="2026-01-23 07:20:31.471311002 +0000 UTC m=+3634.703818996" watchObservedRunningTime="2026-01-23 07:20:31.474292996 +0000 UTC m=+3634.706800970" Jan 23 07:20:37 crc kubenswrapper[4784]: I0123 07:20:37.688219 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:37 crc kubenswrapper[4784]: I0123 07:20:37.688979 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:37 crc kubenswrapper[4784]: I0123 07:20:37.750090 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:38 crc kubenswrapper[4784]: I0123 07:20:38.606931 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:38 crc kubenswrapper[4784]: I0123 07:20:38.681199 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-mpx8k"] Jan 23 07:20:40 crc kubenswrapper[4784]: I0123 07:20:40.556603 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mpx8k" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerName="registry-server" containerID="cri-o://e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae" gracePeriod=2 Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.296439 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.449384 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-catalog-content\") pod \"df6f4029-51e5-4394-88ac-979a6f58f44f\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.449521 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-utilities\") pod \"df6f4029-51e5-4394-88ac-979a6f58f44f\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.449618 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr5t6\" (UniqueName: \"kubernetes.io/projected/df6f4029-51e5-4394-88ac-979a6f58f44f-kube-api-access-mr5t6\") pod \"df6f4029-51e5-4394-88ac-979a6f58f44f\" (UID: \"df6f4029-51e5-4394-88ac-979a6f58f44f\") " Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.451110 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-utilities" (OuterVolumeSpecName: "utilities") pod "df6f4029-51e5-4394-88ac-979a6f58f44f" (UID: 
"df6f4029-51e5-4394-88ac-979a6f58f44f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.464156 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6f4029-51e5-4394-88ac-979a6f58f44f-kube-api-access-mr5t6" (OuterVolumeSpecName: "kube-api-access-mr5t6") pod "df6f4029-51e5-4394-88ac-979a6f58f44f" (UID: "df6f4029-51e5-4394-88ac-979a6f58f44f"). InnerVolumeSpecName "kube-api-access-mr5t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.553302 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.553343 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr5t6\" (UniqueName: \"kubernetes.io/projected/df6f4029-51e5-4394-88ac-979a6f58f44f-kube-api-access-mr5t6\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.581876 4784 generic.go:334] "Generic (PLEG): container finished" podID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerID="e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae" exitCode=0 Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.581942 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpx8k" event={"ID":"df6f4029-51e5-4394-88ac-979a6f58f44f","Type":"ContainerDied","Data":"e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae"} Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.581987 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpx8k" 
event={"ID":"df6f4029-51e5-4394-88ac-979a6f58f44f","Type":"ContainerDied","Data":"b1be07727d2531490fcd31f8a9be1cb47552dcfced2b924b7fdf16bf7393ace4"} Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.582035 4784 scope.go:117] "RemoveContainer" containerID="e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.582583 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpx8k" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.598234 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df6f4029-51e5-4394-88ac-979a6f58f44f" (UID: "df6f4029-51e5-4394-88ac-979a6f58f44f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.614577 4784 scope.go:117] "RemoveContainer" containerID="6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.649959 4784 scope.go:117] "RemoveContainer" containerID="3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.655865 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df6f4029-51e5-4394-88ac-979a6f58f44f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.697495 4784 scope.go:117] "RemoveContainer" containerID="e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae" Jan 23 07:20:42 crc kubenswrapper[4784]: E0123 07:20:42.698287 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae\": container with ID starting with e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae not found: ID does not exist" containerID="e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.698355 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae"} err="failed to get container status \"e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae\": rpc error: code = NotFound desc = could not find container \"e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae\": container with ID starting with e06fcb6077360a913d137c5a9c272628e90fb3e487e3c9cc85cad39ddd3e45ae not found: ID does not exist" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.698407 4784 scope.go:117] "RemoveContainer" containerID="6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84" Jan 23 07:20:42 crc kubenswrapper[4784]: E0123 07:20:42.698840 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84\": container with ID starting with 6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84 not found: ID does not exist" containerID="6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.698892 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84"} err="failed to get container status \"6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84\": rpc error: code = NotFound desc = could not find container \"6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84\": container with ID 
starting with 6a9497a3ba0d05a5b6de21bef15558b6a551125e4cd0d16ac5eefae20de7fa84 not found: ID does not exist" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.698927 4784 scope.go:117] "RemoveContainer" containerID="3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584" Jan 23 07:20:42 crc kubenswrapper[4784]: E0123 07:20:42.699185 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584\": container with ID starting with 3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584 not found: ID does not exist" containerID="3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.699212 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584"} err="failed to get container status \"3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584\": rpc error: code = NotFound desc = could not find container \"3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584\": container with ID starting with 3a11c4b040512a94572da93af0072e560a5ad89f95d781655178fd928e7d6584 not found: ID does not exist" Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.935746 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpx8k"] Jan 23 07:20:42 crc kubenswrapper[4784]: I0123 07:20:42.946554 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mpx8k"] Jan 23 07:20:43 crc kubenswrapper[4784]: I0123 07:20:43.272396 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" path="/var/lib/kubelet/pods/df6f4029-51e5-4394-88ac-979a6f58f44f/volumes" Jan 23 07:21:23 crc kubenswrapper[4784]: I0123 07:21:23.602866 
4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:21:23 crc kubenswrapper[4784]: I0123 07:21:23.603629 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:21:53 crc kubenswrapper[4784]: I0123 07:21:53.603509 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:21:53 crc kubenswrapper[4784]: I0123 07:21:53.604459 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:22:23 crc kubenswrapper[4784]: I0123 07:22:23.603365 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:22:23 crc kubenswrapper[4784]: I0123 07:22:23.604378 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" 
podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:22:23 crc kubenswrapper[4784]: I0123 07:22:23.604451 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 07:22:23 crc kubenswrapper[4784]: I0123 07:22:23.605616 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:22:23 crc kubenswrapper[4784]: I0123 07:22:23.605702 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" gracePeriod=600 Jan 23 07:22:23 crc kubenswrapper[4784]: I0123 07:22:23.858355 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" exitCode=0 Jan 23 07:22:23 crc kubenswrapper[4784]: I0123 07:22:23.858405 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"} Jan 23 07:22:23 crc kubenswrapper[4784]: I0123 07:22:23.858444 4784 scope.go:117] "RemoveContainer" 
containerID="d338b49f23099312d309fec3a4ee34e5faf06f7670aa1a5dfe8475370de40deb" Jan 23 07:22:24 crc kubenswrapper[4784]: E0123 07:22:24.279625 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:22:24 crc kubenswrapper[4784]: I0123 07:22:24.871661 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:22:24 crc kubenswrapper[4784]: E0123 07:22:24.872318 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:22:36 crc kubenswrapper[4784]: I0123 07:22:36.254640 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:22:36 crc kubenswrapper[4784]: E0123 07:22:36.255817 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:22:50 crc kubenswrapper[4784]: I0123 07:22:50.255220 4784 scope.go:117] 
"RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:22:50 crc kubenswrapper[4784]: E0123 07:22:50.257058 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:23:02 crc kubenswrapper[4784]: I0123 07:23:02.254009 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:23:02 crc kubenswrapper[4784]: E0123 07:23:02.255340 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:23:13 crc kubenswrapper[4784]: I0123 07:23:13.254571 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:23:13 crc kubenswrapper[4784]: E0123 07:23:13.255598 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:23:25 crc kubenswrapper[4784]: I0123 07:23:25.254437 
4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:23:25 crc kubenswrapper[4784]: E0123 07:23:25.256432 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:23:39 crc kubenswrapper[4784]: I0123 07:23:39.255087 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:23:39 crc kubenswrapper[4784]: E0123 07:23:39.256704 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:23:54 crc kubenswrapper[4784]: I0123 07:23:54.253600 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:23:54 crc kubenswrapper[4784]: E0123 07:23:54.256070 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:24:08 crc kubenswrapper[4784]: I0123 
07:24:08.253867 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:24:08 crc kubenswrapper[4784]: E0123 07:24:08.255235 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:24:23 crc kubenswrapper[4784]: I0123 07:24:23.254467 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:24:23 crc kubenswrapper[4784]: E0123 07:24:23.255699 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:24:37 crc kubenswrapper[4784]: I0123 07:24:37.268357 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:24:37 crc kubenswrapper[4784]: E0123 07:24:37.270029 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:24:49 crc 
kubenswrapper[4784]: I0123 07:24:49.254505 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:24:49 crc kubenswrapper[4784]: E0123 07:24:49.255599 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:25:00 crc kubenswrapper[4784]: I0123 07:25:00.253543 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:25:00 crc kubenswrapper[4784]: E0123 07:25:00.254495 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:25:14 crc kubenswrapper[4784]: I0123 07:25:14.254220 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:25:14 crc kubenswrapper[4784]: E0123 07:25:14.255801 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:25:26 crc kubenswrapper[4784]: I0123 07:25:26.255085 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:25:26 crc kubenswrapper[4784]: E0123 07:25:26.256485 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:25:39 crc kubenswrapper[4784]: I0123 07:25:39.254197 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:25:39 crc kubenswrapper[4784]: E0123 07:25:39.255463 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:25:51 crc kubenswrapper[4784]: I0123 07:25:51.255031 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:25:51 crc kubenswrapper[4784]: E0123 07:25:51.256224 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:26:03 crc kubenswrapper[4784]: I0123 07:26:03.254687 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:26:03 crc kubenswrapper[4784]: E0123 07:26:03.256078 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:26:15 crc kubenswrapper[4784]: I0123 07:26:15.254409 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:26:15 crc kubenswrapper[4784]: E0123 07:26:15.255881 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:26:30 crc kubenswrapper[4784]: I0123 07:26:30.254211 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:26:30 crc kubenswrapper[4784]: E0123 07:26:30.255312 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:26:45 crc kubenswrapper[4784]: I0123 07:26:45.254311 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:26:45 crc kubenswrapper[4784]: E0123 07:26:45.255333 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:27:00 crc kubenswrapper[4784]: I0123 07:27:00.254311 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:27:00 crc kubenswrapper[4784]: E0123 07:27:00.256399 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:27:13 crc kubenswrapper[4784]: I0123 07:27:13.254398 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:27:13 crc kubenswrapper[4784]: E0123 07:27:13.255570 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.588772 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ptqlr"]
Jan 23 07:27:26 crc kubenswrapper[4784]: E0123 07:27:26.590340 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerName="registry-server"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.590370 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerName="registry-server"
Jan 23 07:27:26 crc kubenswrapper[4784]: E0123 07:27:26.590416 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerName="extract-content"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.590425 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerName="extract-content"
Jan 23 07:27:26 crc kubenswrapper[4784]: E0123 07:27:26.590461 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerName="extract-utilities"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.590468 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerName="extract-utilities"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.590763 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6f4029-51e5-4394-88ac-979a6f58f44f" containerName="registry-server"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.592931 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.595575 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptqlr"]
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.722645 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-catalog-content\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.722746 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-utilities\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.723008 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzz4\" (UniqueName: \"kubernetes.io/projected/cf7694a7-7574-43c4-9e99-2216e42abe77-kube-api-access-kmzz4\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.825336 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-catalog-content\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.825433 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-utilities\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.825481 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzz4\" (UniqueName: \"kubernetes.io/projected/cf7694a7-7574-43c4-9e99-2216e42abe77-kube-api-access-kmzz4\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.825962 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-catalog-content\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.825977 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-utilities\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.846826 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzz4\" (UniqueName: \"kubernetes.io/projected/cf7694a7-7574-43c4-9e99-2216e42abe77-kube-api-access-kmzz4\") pod \"redhat-marketplace-ptqlr\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") " pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:26 crc kubenswrapper[4784]: I0123 07:27:26.950643 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:27 crc kubenswrapper[4784]: I0123 07:27:27.263014 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec"
Jan 23 07:27:27 crc kubenswrapper[4784]: I0123 07:27:27.446682 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptqlr"]
Jan 23 07:27:27 crc kubenswrapper[4784]: I0123 07:27:27.639858 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"994d3bcdb549373bc6598290555b55b409dfcc798f022bdc875fe89efe149218"}
Jan 23 07:27:27 crc kubenswrapper[4784]: I0123 07:27:27.642826 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptqlr" event={"ID":"cf7694a7-7574-43c4-9e99-2216e42abe77","Type":"ContainerStarted","Data":"fd03587a68e1d474835023f58ac71988bd247e169a51907198226b6a9cfe6312"}
Jan 23 07:27:28 crc kubenswrapper[4784]: I0123 07:27:28.656356 4784 generic.go:334] "Generic (PLEG): container finished" podID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerID="b7a7a06030f941b464938cb42c4067368a8a1e0cf7dc1c00d17b25922b83781b" exitCode=0
Jan 23 07:27:28 crc kubenswrapper[4784]: I0123 07:27:28.656423 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptqlr" event={"ID":"cf7694a7-7574-43c4-9e99-2216e42abe77","Type":"ContainerDied","Data":"b7a7a06030f941b464938cb42c4067368a8a1e0cf7dc1c00d17b25922b83781b"}
Jan 23 07:27:28 crc kubenswrapper[4784]: I0123 07:27:28.660500 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 07:27:30 crc kubenswrapper[4784]: I0123 07:27:30.684426 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptqlr" event={"ID":"cf7694a7-7574-43c4-9e99-2216e42abe77","Type":"ContainerStarted","Data":"3884d6100f27b63e300a335dedfac5bfec7823f7eca07ab82b376a52717a84d2"}
Jan 23 07:27:31 crc kubenswrapper[4784]: I0123 07:27:31.696970 4784 generic.go:334] "Generic (PLEG): container finished" podID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerID="3884d6100f27b63e300a335dedfac5bfec7823f7eca07ab82b376a52717a84d2" exitCode=0
Jan 23 07:27:31 crc kubenswrapper[4784]: I0123 07:27:31.697056 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptqlr" event={"ID":"cf7694a7-7574-43c4-9e99-2216e42abe77","Type":"ContainerDied","Data":"3884d6100f27b63e300a335dedfac5bfec7823f7eca07ab82b376a52717a84d2"}
Jan 23 07:27:33 crc kubenswrapper[4784]: I0123 07:27:33.721574 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptqlr" event={"ID":"cf7694a7-7574-43c4-9e99-2216e42abe77","Type":"ContainerStarted","Data":"7da2b804ba3827008e9acce8dcdb8c097dfeb12298dbd6d487a2abf80b535b04"}
Jan 23 07:27:33 crc kubenswrapper[4784]: I0123 07:27:33.746445 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ptqlr" podStartSLOduration=3.474798474 podStartE2EDuration="7.746424218s" podCreationTimestamp="2026-01-23 07:27:26 +0000 UTC" firstStartedPulling="2026-01-23 07:27:28.660217655 +0000 UTC m=+4051.892725629" lastFinishedPulling="2026-01-23 07:27:32.931843359 +0000 UTC m=+4056.164351373" observedRunningTime="2026-01-23 07:27:33.738007401 +0000 UTC m=+4056.970515375" watchObservedRunningTime="2026-01-23 07:27:33.746424218 +0000 UTC m=+4056.978932192"
Jan 23 07:27:36 crc kubenswrapper[4784]: I0123 07:27:36.951288 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:36 crc kubenswrapper[4784]: I0123 07:27:36.953928 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:37 crc kubenswrapper[4784]: I0123 07:27:37.013548 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:38 crc kubenswrapper[4784]: I0123 07:27:38.820174 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:38 crc kubenswrapper[4784]: I0123 07:27:38.880786 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptqlr"]
Jan 23 07:27:40 crc kubenswrapper[4784]: I0123 07:27:40.786230 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ptqlr" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerName="registry-server" containerID="cri-o://7da2b804ba3827008e9acce8dcdb8c097dfeb12298dbd6d487a2abf80b535b04" gracePeriod=2
Jan 23 07:27:41 crc kubenswrapper[4784]: I0123 07:27:41.802824 4784 generic.go:334] "Generic (PLEG): container finished" podID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerID="7da2b804ba3827008e9acce8dcdb8c097dfeb12298dbd6d487a2abf80b535b04" exitCode=0
Jan 23 07:27:41 crc kubenswrapper[4784]: I0123 07:27:41.802933 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptqlr" event={"ID":"cf7694a7-7574-43c4-9e99-2216e42abe77","Type":"ContainerDied","Data":"7da2b804ba3827008e9acce8dcdb8c097dfeb12298dbd6d487a2abf80b535b04"}
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.144737 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.306604 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-catalog-content\") pod \"cf7694a7-7574-43c4-9e99-2216e42abe77\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") "
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.306993 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzz4\" (UniqueName: \"kubernetes.io/projected/cf7694a7-7574-43c4-9e99-2216e42abe77-kube-api-access-kmzz4\") pod \"cf7694a7-7574-43c4-9e99-2216e42abe77\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") "
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.307034 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-utilities\") pod \"cf7694a7-7574-43c4-9e99-2216e42abe77\" (UID: \"cf7694a7-7574-43c4-9e99-2216e42abe77\") "
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.308072 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-utilities" (OuterVolumeSpecName: "utilities") pod "cf7694a7-7574-43c4-9e99-2216e42abe77" (UID: "cf7694a7-7574-43c4-9e99-2216e42abe77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.313740 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7694a7-7574-43c4-9e99-2216e42abe77-kube-api-access-kmzz4" (OuterVolumeSpecName: "kube-api-access-kmzz4") pod "cf7694a7-7574-43c4-9e99-2216e42abe77" (UID: "cf7694a7-7574-43c4-9e99-2216e42abe77"). InnerVolumeSpecName "kube-api-access-kmzz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.336487 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf7694a7-7574-43c4-9e99-2216e42abe77" (UID: "cf7694a7-7574-43c4-9e99-2216e42abe77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.410444 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzz4\" (UniqueName: \"kubernetes.io/projected/cf7694a7-7574-43c4-9e99-2216e42abe77-kube-api-access-kmzz4\") on node \"crc\" DevicePath \"\""
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.410484 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.410494 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf7694a7-7574-43c4-9e99-2216e42abe77-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.860028 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptqlr" event={"ID":"cf7694a7-7574-43c4-9e99-2216e42abe77","Type":"ContainerDied","Data":"fd03587a68e1d474835023f58ac71988bd247e169a51907198226b6a9cfe6312"}
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.860096 4784 scope.go:117] "RemoveContainer" containerID="7da2b804ba3827008e9acce8dcdb8c097dfeb12298dbd6d487a2abf80b535b04"
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.860270 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptqlr"
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.918006 4784 scope.go:117] "RemoveContainer" containerID="3884d6100f27b63e300a335dedfac5bfec7823f7eca07ab82b376a52717a84d2"
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.933147 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptqlr"]
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.945508 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptqlr"]
Jan 23 07:27:42 crc kubenswrapper[4784]: I0123 07:27:42.948394 4784 scope.go:117] "RemoveContainer" containerID="b7a7a06030f941b464938cb42c4067368a8a1e0cf7dc1c00d17b25922b83781b"
Jan 23 07:27:43 crc kubenswrapper[4784]: I0123 07:27:43.270223 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" path="/var/lib/kubelet/pods/cf7694a7-7574-43c4-9e99-2216e42abe77/volumes"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.183402 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8df9f"]
Jan 23 07:28:22 crc kubenswrapper[4784]: E0123 07:28:22.185035 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerName="extract-utilities"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.185059 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerName="extract-utilities"
Jan 23 07:28:22 crc kubenswrapper[4784]: E0123 07:28:22.185093 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerName="registry-server"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.185102 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerName="registry-server"
Jan 23 07:28:22 crc kubenswrapper[4784]: E0123 07:28:22.185118 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerName="extract-content"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.185128 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerName="extract-content"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.185427 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf7694a7-7574-43c4-9e99-2216e42abe77" containerName="registry-server"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.187526 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.227893 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8df9f"]
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.340107 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-catalog-content\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.340226 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrks\" (UniqueName: \"kubernetes.io/projected/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-kube-api-access-nwrks\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.340276 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-utilities\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.443713 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-catalog-content\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.443793 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrks\" (UniqueName: \"kubernetes.io/projected/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-kube-api-access-nwrks\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.443816 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-utilities\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.444259 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-utilities\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.446320 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-catalog-content\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.475218 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrks\" (UniqueName: \"kubernetes.io/projected/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-kube-api-access-nwrks\") pod \"certified-operators-8df9f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") " pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:22 crc kubenswrapper[4784]: I0123 07:28:22.535316 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:23 crc kubenswrapper[4784]: I0123 07:28:23.144564 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8df9f"]
Jan 23 07:28:23 crc kubenswrapper[4784]: I0123 07:28:23.332939 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8df9f" event={"ID":"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f","Type":"ContainerStarted","Data":"1e3b8bee4f54ea6a247e47ed120ee5289f602632a803ad1be5e2cb20f78280d3"}
Jan 23 07:28:24 crc kubenswrapper[4784]: I0123 07:28:24.343043 4784 generic.go:334] "Generic (PLEG): container finished" podID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerID="625585ddc20e069f0052d3d60c244fe510ecf3056607ddf4e7d521bb29bf0d01" exitCode=0
Jan 23 07:28:24 crc kubenswrapper[4784]: I0123 07:28:24.343169 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8df9f" event={"ID":"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f","Type":"ContainerDied","Data":"625585ddc20e069f0052d3d60c244fe510ecf3056607ddf4e7d521bb29bf0d01"}
Jan 23 07:28:28 crc kubenswrapper[4784]: I0123 07:28:28.387216 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8df9f" event={"ID":"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f","Type":"ContainerStarted","Data":"c4a2f5209b0a6e9e7de710f4c9f94317e85c55b5bc56d9b63f6ec4c79c2413b6"}
Jan 23 07:28:29 crc kubenswrapper[4784]: I0123 07:28:29.404363 4784 generic.go:334] "Generic (PLEG): container finished" podID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerID="c4a2f5209b0a6e9e7de710f4c9f94317e85c55b5bc56d9b63f6ec4c79c2413b6" exitCode=0
Jan 23 07:28:29 crc kubenswrapper[4784]: I0123 07:28:29.404455 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8df9f" event={"ID":"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f","Type":"ContainerDied","Data":"c4a2f5209b0a6e9e7de710f4c9f94317e85c55b5bc56d9b63f6ec4c79c2413b6"}
Jan 23 07:28:30 crc kubenswrapper[4784]: I0123 07:28:30.418356 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8df9f" event={"ID":"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f","Type":"ContainerStarted","Data":"eef05d377bee9683568e9ceeb1e3cface1bd46928be3590af9b6e1dd364a6c7a"}
Jan 23 07:28:30 crc kubenswrapper[4784]: I0123 07:28:30.446449 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8df9f" podStartSLOduration=2.737134854 podStartE2EDuration="8.446423924s" podCreationTimestamp="2026-01-23 07:28:22 +0000 UTC" firstStartedPulling="2026-01-23 07:28:24.345483876 +0000 UTC m=+4107.577991850" lastFinishedPulling="2026-01-23 07:28:30.054772946 +0000 UTC m=+4113.287280920" observedRunningTime="2026-01-23 07:28:30.4430083 +0000 UTC m=+4113.675516294" watchObservedRunningTime="2026-01-23 07:28:30.446423924 +0000 UTC m=+4113.678931898"
Jan 23 07:28:32 crc kubenswrapper[4784]: I0123 07:28:32.536855 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:32 crc kubenswrapper[4784]: I0123 07:28:32.537638 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:33 crc kubenswrapper[4784]: I0123 07:28:33.595823 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8df9f" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="registry-server" probeResult="failure" output=<
Jan 23 07:28:33 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s
Jan 23 07:28:33 crc kubenswrapper[4784]: >
Jan 23 07:28:42 crc kubenswrapper[4784]: I0123 07:28:42.596086 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:42 crc kubenswrapper[4784]: I0123 07:28:42.668578 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:42 crc kubenswrapper[4784]: I0123 07:28:42.844044 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8df9f"]
Jan 23 07:28:44 crc kubenswrapper[4784]: I0123 07:28:44.592807 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8df9f" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="registry-server" containerID="cri-o://eef05d377bee9683568e9ceeb1e3cface1bd46928be3590af9b6e1dd364a6c7a" gracePeriod=2
Jan 23 07:28:45 crc kubenswrapper[4784]: I0123 07:28:45.607525 4784 generic.go:334] "Generic (PLEG): container finished" podID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerID="eef05d377bee9683568e9ceeb1e3cface1bd46928be3590af9b6e1dd364a6c7a" exitCode=0
Jan 23 07:28:45 crc kubenswrapper[4784]: I0123 07:28:45.607616 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8df9f" event={"ID":"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f","Type":"ContainerDied","Data":"eef05d377bee9683568e9ceeb1e3cface1bd46928be3590af9b6e1dd364a6c7a"}
Jan 23 07:28:46 crc kubenswrapper[4784]: I0123 07:28:46.747907 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:46 crc kubenswrapper[4784]: I0123 07:28:46.932526 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwrks\" (UniqueName: \"kubernetes.io/projected/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-kube-api-access-nwrks\") pod \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") "
Jan 23 07:28:46 crc kubenswrapper[4784]: I0123 07:28:46.932721 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-utilities\") pod \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") "
Jan 23 07:28:46 crc kubenswrapper[4784]: I0123 07:28:46.932837 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-catalog-content\") pod \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\" (UID: \"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f\") "
Jan 23 07:28:46 crc kubenswrapper[4784]: I0123 07:28:46.935097 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-utilities" (OuterVolumeSpecName: "utilities") pod "d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" (UID: "d1a9e8a0-e0be-4d24-8437-bebfbf24df3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:28:46 crc kubenswrapper[4784]: I0123 07:28:46.946016 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-kube-api-access-nwrks" (OuterVolumeSpecName: "kube-api-access-nwrks") pod "d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" (UID: "d1a9e8a0-e0be-4d24-8437-bebfbf24df3f"). InnerVolumeSpecName "kube-api-access-nwrks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:28:46 crc kubenswrapper[4784]: I0123 07:28:46.996094 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" (UID: "d1a9e8a0-e0be-4d24-8437-bebfbf24df3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.037056 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.037090 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwrks\" (UniqueName: \"kubernetes.io/projected/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-kube-api-access-nwrks\") on node \"crc\" DevicePath \"\""
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.037105 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.652169 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8df9f" event={"ID":"d1a9e8a0-e0be-4d24-8437-bebfbf24df3f","Type":"ContainerDied","Data":"1e3b8bee4f54ea6a247e47ed120ee5289f602632a803ad1be5e2cb20f78280d3"}
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.652255 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8df9f"
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.652270 4784 scope.go:117] "RemoveContainer" containerID="eef05d377bee9683568e9ceeb1e3cface1bd46928be3590af9b6e1dd364a6c7a"
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.692520 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8df9f"]
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.694896 4784 scope.go:117] "RemoveContainer" containerID="c4a2f5209b0a6e9e7de710f4c9f94317e85c55b5bc56d9b63f6ec4c79c2413b6"
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.705051 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8df9f"]
Jan 23 07:28:47 crc kubenswrapper[4784]: I0123 07:28:47.787078 4784 scope.go:117] "RemoveContainer" containerID="625585ddc20e069f0052d3d60c244fe510ecf3056607ddf4e7d521bb29bf0d01"
Jan 23 07:28:49 crc kubenswrapper[4784]: I0123 07:28:49.264843 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" path="/var/lib/kubelet/pods/d1a9e8a0-e0be-4d24-8437-bebfbf24df3f/volumes"
Jan 23 07:29:53 crc kubenswrapper[4784]: I0123 07:29:53.603480 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:29:53 crc kubenswrapper[4784]: I0123 07:29:53.604658 4784 prober.go:107] "Probe failed" probeType="Liveness"
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.210467 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v"] Jan 23 07:30:00 crc kubenswrapper[4784]: E0123 07:30:00.211902 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="registry-server" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.211952 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="registry-server" Jan 23 07:30:00 crc kubenswrapper[4784]: E0123 07:30:00.211975 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="extract-content" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.211981 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="extract-content" Jan 23 07:30:00 crc kubenswrapper[4784]: E0123 07:30:00.212014 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="extract-utilities" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.212021 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="extract-utilities" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.212225 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a9e8a0-e0be-4d24-8437-bebfbf24df3f" containerName="registry-server" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.213096 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.215940 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.216536 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.222568 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v"] Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.338734 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8621175-4dc2-49c2-b6a9-4df990afffe2-secret-volume\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.338923 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8621175-4dc2-49c2-b6a9-4df990afffe2-config-volume\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.338967 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwjn\" (UniqueName: \"kubernetes.io/projected/d8621175-4dc2-49c2-b6a9-4df990afffe2-kube-api-access-xfwjn\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.443154 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8621175-4dc2-49c2-b6a9-4df990afffe2-secret-volume\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.443270 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8621175-4dc2-49c2-b6a9-4df990afffe2-config-volume\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.443302 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwjn\" (UniqueName: \"kubernetes.io/projected/d8621175-4dc2-49c2-b6a9-4df990afffe2-kube-api-access-xfwjn\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.445116 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8621175-4dc2-49c2-b6a9-4df990afffe2-config-volume\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.458729 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d8621175-4dc2-49c2-b6a9-4df990afffe2-secret-volume\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.463870 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfwjn\" (UniqueName: \"kubernetes.io/projected/d8621175-4dc2-49c2-b6a9-4df990afffe2-kube-api-access-xfwjn\") pod \"collect-profiles-29485890-xnj7v\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:00 crc kubenswrapper[4784]: I0123 07:30:00.550724 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:01 crc kubenswrapper[4784]: I0123 07:30:01.028880 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v"] Jan 23 07:30:01 crc kubenswrapper[4784]: I0123 07:30:01.512569 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" event={"ID":"d8621175-4dc2-49c2-b6a9-4df990afffe2","Type":"ContainerStarted","Data":"49ee1d0c353041f425d99382d951ddce5282f9717f8c802c90fa7c062f0aa285"} Jan 23 07:30:01 crc kubenswrapper[4784]: I0123 07:30:01.512638 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" event={"ID":"d8621175-4dc2-49c2-b6a9-4df990afffe2","Type":"ContainerStarted","Data":"cbbe37b46637379e8d4d5a57e9138d9dc5ad1b629de869aec74d63d4132ceba9"} Jan 23 07:30:01 crc kubenswrapper[4784]: I0123 07:30:01.535857 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" 
podStartSLOduration=1.535741714 podStartE2EDuration="1.535741714s" podCreationTimestamp="2026-01-23 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:30:01.529742875 +0000 UTC m=+4204.762250869" watchObservedRunningTime="2026-01-23 07:30:01.535741714 +0000 UTC m=+4204.768249698" Jan 23 07:30:02 crc kubenswrapper[4784]: I0123 07:30:02.545180 4784 generic.go:334] "Generic (PLEG): container finished" podID="d8621175-4dc2-49c2-b6a9-4df990afffe2" containerID="49ee1d0c353041f425d99382d951ddce5282f9717f8c802c90fa7c062f0aa285" exitCode=0 Jan 23 07:30:02 crc kubenswrapper[4784]: I0123 07:30:02.545244 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" event={"ID":"d8621175-4dc2-49c2-b6a9-4df990afffe2","Type":"ContainerDied","Data":"49ee1d0c353041f425d99382d951ddce5282f9717f8c802c90fa7c062f0aa285"} Jan 23 07:30:03 crc kubenswrapper[4784]: I0123 07:30:03.934876 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.133596 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8621175-4dc2-49c2-b6a9-4df990afffe2-config-volume\") pod \"d8621175-4dc2-49c2-b6a9-4df990afffe2\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.133738 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8621175-4dc2-49c2-b6a9-4df990afffe2-secret-volume\") pod \"d8621175-4dc2-49c2-b6a9-4df990afffe2\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.133810 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfwjn\" (UniqueName: \"kubernetes.io/projected/d8621175-4dc2-49c2-b6a9-4df990afffe2-kube-api-access-xfwjn\") pod \"d8621175-4dc2-49c2-b6a9-4df990afffe2\" (UID: \"d8621175-4dc2-49c2-b6a9-4df990afffe2\") " Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.135084 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8621175-4dc2-49c2-b6a9-4df990afffe2-config-volume" (OuterVolumeSpecName: "config-volume") pod "d8621175-4dc2-49c2-b6a9-4df990afffe2" (UID: "d8621175-4dc2-49c2-b6a9-4df990afffe2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.141973 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8621175-4dc2-49c2-b6a9-4df990afffe2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8621175-4dc2-49c2-b6a9-4df990afffe2" (UID: "d8621175-4dc2-49c2-b6a9-4df990afffe2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.144998 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8621175-4dc2-49c2-b6a9-4df990afffe2-kube-api-access-xfwjn" (OuterVolumeSpecName: "kube-api-access-xfwjn") pod "d8621175-4dc2-49c2-b6a9-4df990afffe2" (UID: "d8621175-4dc2-49c2-b6a9-4df990afffe2"). InnerVolumeSpecName "kube-api-access-xfwjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.237199 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8621175-4dc2-49c2-b6a9-4df990afffe2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.237485 4784 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8621175-4dc2-49c2-b6a9-4df990afffe2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.237571 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfwjn\" (UniqueName: \"kubernetes.io/projected/d8621175-4dc2-49c2-b6a9-4df990afffe2-kube-api-access-xfwjn\") on node \"crc\" DevicePath \"\"" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.570245 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" event={"ID":"d8621175-4dc2-49c2-b6a9-4df990afffe2","Type":"ContainerDied","Data":"cbbe37b46637379e8d4d5a57e9138d9dc5ad1b629de869aec74d63d4132ceba9"} Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.570654 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbbe37b46637379e8d4d5a57e9138d9dc5ad1b629de869aec74d63d4132ceba9" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.570397 4784 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-xnj7v" Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.620699 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h"] Jan 23 07:30:04 crc kubenswrapper[4784]: I0123 07:30:04.632706 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-w695h"] Jan 23 07:30:05 crc kubenswrapper[4784]: I0123 07:30:05.270078 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77883559-de68-40d3-9375-f9ee148ccf9b" path="/var/lib/kubelet/pods/77883559-de68-40d3-9375-f9ee148ccf9b/volumes" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.712567 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gwqlh"] Jan 23 07:30:16 crc kubenswrapper[4784]: E0123 07:30:16.715164 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8621175-4dc2-49c2-b6a9-4df990afffe2" containerName="collect-profiles" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.715195 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8621175-4dc2-49c2-b6a9-4df990afffe2" containerName="collect-profiles" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.715529 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8621175-4dc2-49c2-b6a9-4df990afffe2" containerName="collect-profiles" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.719888 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.729869 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwqlh"] Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.878976 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-utilities\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.879421 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t897r\" (UniqueName: \"kubernetes.io/projected/c043eb7d-84a4-488c-b47d-57b305e936ac-kube-api-access-t897r\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.879478 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-catalog-content\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.981898 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-catalog-content\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.982192 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-utilities\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.982254 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t897r\" (UniqueName: \"kubernetes.io/projected/c043eb7d-84a4-488c-b47d-57b305e936ac-kube-api-access-t897r\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.982383 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-catalog-content\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:16 crc kubenswrapper[4784]: I0123 07:30:16.982785 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-utilities\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:17 crc kubenswrapper[4784]: I0123 07:30:17.008083 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t897r\" (UniqueName: \"kubernetes.io/projected/c043eb7d-84a4-488c-b47d-57b305e936ac-kube-api-access-t897r\") pod \"community-operators-gwqlh\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:17 crc kubenswrapper[4784]: I0123 07:30:17.063311 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:17 crc kubenswrapper[4784]: I0123 07:30:17.613717 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwqlh"] Jan 23 07:30:17 crc kubenswrapper[4784]: I0123 07:30:17.794655 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwqlh" event={"ID":"c043eb7d-84a4-488c-b47d-57b305e936ac","Type":"ContainerStarted","Data":"a9b4ea22993350c9aa89f76faf95238287c59206435af2975e136fdf4c83154e"} Jan 23 07:30:18 crc kubenswrapper[4784]: I0123 07:30:18.813083 4784 generic.go:334] "Generic (PLEG): container finished" podID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerID="10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371" exitCode=0 Jan 23 07:30:18 crc kubenswrapper[4784]: I0123 07:30:18.813189 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwqlh" event={"ID":"c043eb7d-84a4-488c-b47d-57b305e936ac","Type":"ContainerDied","Data":"10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371"} Jan 23 07:30:22 crc kubenswrapper[4784]: I0123 07:30:22.870389 4784 generic.go:334] "Generic (PLEG): container finished" podID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerID="d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687" exitCode=0 Jan 23 07:30:22 crc kubenswrapper[4784]: I0123 07:30:22.870474 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwqlh" event={"ID":"c043eb7d-84a4-488c-b47d-57b305e936ac","Type":"ContainerDied","Data":"d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687"} Jan 23 07:30:23 crc kubenswrapper[4784]: I0123 07:30:23.603516 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:30:23 crc kubenswrapper[4784]: I0123 07:30:23.603583 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:30:25 crc kubenswrapper[4784]: I0123 07:30:25.905816 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwqlh" event={"ID":"c043eb7d-84a4-488c-b47d-57b305e936ac","Type":"ContainerStarted","Data":"765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067"} Jan 23 07:30:25 crc kubenswrapper[4784]: I0123 07:30:25.952712 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gwqlh" podStartSLOduration=3.574014574 podStartE2EDuration="9.952687007s" podCreationTimestamp="2026-01-23 07:30:16 +0000 UTC" firstStartedPulling="2026-01-23 07:30:18.817904449 +0000 UTC m=+4222.050412413" lastFinishedPulling="2026-01-23 07:30:25.196576882 +0000 UTC m=+4228.429084846" observedRunningTime="2026-01-23 07:30:25.933584626 +0000 UTC m=+4229.166092610" watchObservedRunningTime="2026-01-23 07:30:25.952687007 +0000 UTC m=+4229.185194981" Jan 23 07:30:27 crc kubenswrapper[4784]: I0123 07:30:27.064564 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:27 crc kubenswrapper[4784]: I0123 07:30:27.064926 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:28 crc kubenswrapper[4784]: I0123 07:30:28.121302 4784 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-gwqlh" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="registry-server" probeResult="failure" output=< Jan 23 07:30:28 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 07:30:28 crc kubenswrapper[4784]: > Jan 23 07:30:37 crc kubenswrapper[4784]: I0123 07:30:37.133083 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:37 crc kubenswrapper[4784]: I0123 07:30:37.188780 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:37 crc kubenswrapper[4784]: I0123 07:30:37.376333 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwqlh"] Jan 23 07:30:38 crc kubenswrapper[4784]: I0123 07:30:38.645864 4784 scope.go:117] "RemoveContainer" containerID="6462b177a2054edd05b2af376e02f699a1f2f96bacbda24887687c093699c490" Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.068319 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gwqlh" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="registry-server" containerID="cri-o://765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067" gracePeriod=2 Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.710612 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.765819 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-utilities\") pod \"c043eb7d-84a4-488c-b47d-57b305e936ac\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.765964 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t897r\" (UniqueName: \"kubernetes.io/projected/c043eb7d-84a4-488c-b47d-57b305e936ac-kube-api-access-t897r\") pod \"c043eb7d-84a4-488c-b47d-57b305e936ac\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.766227 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-catalog-content\") pod \"c043eb7d-84a4-488c-b47d-57b305e936ac\" (UID: \"c043eb7d-84a4-488c-b47d-57b305e936ac\") " Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.766668 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-utilities" (OuterVolumeSpecName: "utilities") pod "c043eb7d-84a4-488c-b47d-57b305e936ac" (UID: "c043eb7d-84a4-488c-b47d-57b305e936ac"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.767354 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.776223 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c043eb7d-84a4-488c-b47d-57b305e936ac-kube-api-access-t897r" (OuterVolumeSpecName: "kube-api-access-t897r") pod "c043eb7d-84a4-488c-b47d-57b305e936ac" (UID: "c043eb7d-84a4-488c-b47d-57b305e936ac"). InnerVolumeSpecName "kube-api-access-t897r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.815700 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c043eb7d-84a4-488c-b47d-57b305e936ac" (UID: "c043eb7d-84a4-488c-b47d-57b305e936ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.869270 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c043eb7d-84a4-488c-b47d-57b305e936ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:30:39 crc kubenswrapper[4784]: I0123 07:30:39.869307 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t897r\" (UniqueName: \"kubernetes.io/projected/c043eb7d-84a4-488c-b47d-57b305e936ac-kube-api-access-t897r\") on node \"crc\" DevicePath \"\"" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.083403 4784 generic.go:334] "Generic (PLEG): container finished" podID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerID="765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067" exitCode=0 Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.083455 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwqlh" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.083467 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwqlh" event={"ID":"c043eb7d-84a4-488c-b47d-57b305e936ac","Type":"ContainerDied","Data":"765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067"} Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.083516 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwqlh" event={"ID":"c043eb7d-84a4-488c-b47d-57b305e936ac","Type":"ContainerDied","Data":"a9b4ea22993350c9aa89f76faf95238287c59206435af2975e136fdf4c83154e"} Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.083547 4784 scope.go:117] "RemoveContainer" containerID="765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.114122 4784 scope.go:117] "RemoveContainer" 
containerID="d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.124385 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwqlh"] Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.136555 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gwqlh"] Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.153700 4784 scope.go:117] "RemoveContainer" containerID="10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.176655 4784 scope.go:117] "RemoveContainer" containerID="765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067" Jan 23 07:30:40 crc kubenswrapper[4784]: E0123 07:30:40.177427 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067\": container with ID starting with 765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067 not found: ID does not exist" containerID="765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.177471 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067"} err="failed to get container status \"765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067\": rpc error: code = NotFound desc = could not find container \"765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067\": container with ID starting with 765d39521916a1891396cad065076a1bea64a1026d9580fdc22df0c8459c0067 not found: ID does not exist" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.177501 4784 scope.go:117] "RemoveContainer" 
containerID="d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687" Jan 23 07:30:40 crc kubenswrapper[4784]: E0123 07:30:40.177933 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687\": container with ID starting with d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687 not found: ID does not exist" containerID="d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.177959 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687"} err="failed to get container status \"d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687\": rpc error: code = NotFound desc = could not find container \"d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687\": container with ID starting with d6cb4dd0102729218291de07b1a701c6649d0ee3c062e3d5dfa5041525702687 not found: ID does not exist" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.177980 4784 scope.go:117] "RemoveContainer" containerID="10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371" Jan 23 07:30:40 crc kubenswrapper[4784]: E0123 07:30:40.178402 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371\": container with ID starting with 10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371 not found: ID does not exist" containerID="10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371" Jan 23 07:30:40 crc kubenswrapper[4784]: I0123 07:30:40.178429 4784 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371"} err="failed to get container status \"10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371\": rpc error: code = NotFound desc = could not find container \"10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371\": container with ID starting with 10f4a5e392fd1e4db67a10549d07b54be9339eedca9c9bb741db7f84cdfa3371 not found: ID does not exist" Jan 23 07:30:41 crc kubenswrapper[4784]: I0123 07:30:41.267436 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" path="/var/lib/kubelet/pods/c043eb7d-84a4-488c-b47d-57b305e936ac/volumes" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.418598 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wnkds"] Jan 23 07:30:51 crc kubenswrapper[4784]: E0123 07:30:51.422373 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="extract-utilities" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.422396 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="extract-utilities" Jan 23 07:30:51 crc kubenswrapper[4784]: E0123 07:30:51.422409 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="registry-server" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.422416 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="registry-server" Jan 23 07:30:51 crc kubenswrapper[4784]: E0123 07:30:51.422446 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="extract-content" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.422452 4784 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="extract-content" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.422655 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c043eb7d-84a4-488c-b47d-57b305e936ac" containerName="registry-server" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.424204 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.438440 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wnkds"] Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.530110 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-utilities\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.530535 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szx4r\" (UniqueName: \"kubernetes.io/projected/1639820c-834e-48f3-923e-9fba2f1ca0d8-kube-api-access-szx4r\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.531126 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-catalog-content\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.633053 4784 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-utilities\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.633133 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szx4r\" (UniqueName: \"kubernetes.io/projected/1639820c-834e-48f3-923e-9fba2f1ca0d8-kube-api-access-szx4r\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.633246 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-catalog-content\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.633653 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-utilities\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.633673 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-catalog-content\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.661724 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szx4r\" (UniqueName: 
\"kubernetes.io/projected/1639820c-834e-48f3-923e-9fba2f1ca0d8-kube-api-access-szx4r\") pod \"redhat-operators-wnkds\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:51 crc kubenswrapper[4784]: I0123 07:30:51.764393 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:30:52 crc kubenswrapper[4784]: I0123 07:30:52.259039 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wnkds"] Jan 23 07:30:53 crc kubenswrapper[4784]: I0123 07:30:53.217031 4784 generic.go:334] "Generic (PLEG): container finished" podID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerID="99ffec2551a72896301c49a8bf929ffee881aad5d908fe12f0eea49fdce3472c" exitCode=0 Jan 23 07:30:53 crc kubenswrapper[4784]: I0123 07:30:53.217094 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnkds" event={"ID":"1639820c-834e-48f3-923e-9fba2f1ca0d8","Type":"ContainerDied","Data":"99ffec2551a72896301c49a8bf929ffee881aad5d908fe12f0eea49fdce3472c"} Jan 23 07:30:53 crc kubenswrapper[4784]: I0123 07:30:53.217305 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnkds" event={"ID":"1639820c-834e-48f3-923e-9fba2f1ca0d8","Type":"ContainerStarted","Data":"80e96f9188961e762921cf0dd43cfda7a34a3951e4f8c929608055d7bbb0d099"} Jan 23 07:30:53 crc kubenswrapper[4784]: I0123 07:30:53.603094 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:30:53 crc kubenswrapper[4784]: I0123 07:30:53.603547 4784 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:30:53 crc kubenswrapper[4784]: I0123 07:30:53.603615 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 07:30:53 crc kubenswrapper[4784]: I0123 07:30:53.604825 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"994d3bcdb549373bc6598290555b55b409dfcc798f022bdc875fe89efe149218"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:30:53 crc kubenswrapper[4784]: I0123 07:30:53.604945 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://994d3bcdb549373bc6598290555b55b409dfcc798f022bdc875fe89efe149218" gracePeriod=600 Jan 23 07:30:54 crc kubenswrapper[4784]: I0123 07:30:54.234949 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="994d3bcdb549373bc6598290555b55b409dfcc798f022bdc875fe89efe149218" exitCode=0 Jan 23 07:30:54 crc kubenswrapper[4784]: I0123 07:30:54.235103 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"994d3bcdb549373bc6598290555b55b409dfcc798f022bdc875fe89efe149218"} Jan 23 07:30:54 crc kubenswrapper[4784]: I0123 07:30:54.235746 4784 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1"} Jan 23 07:30:54 crc kubenswrapper[4784]: I0123 07:30:54.235823 4784 scope.go:117] "RemoveContainer" containerID="d1263c6448830bc62887ac961fe5f4e41067f9b7a38822639eafd898f99842ec" Jan 23 07:31:00 crc kubenswrapper[4784]: I0123 07:31:00.306620 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnkds" event={"ID":"1639820c-834e-48f3-923e-9fba2f1ca0d8","Type":"ContainerStarted","Data":"cdf915b5735e4c75a06e5773ec89009cbc85b3755e5546a5eaf84cd3cc56434b"} Jan 23 07:31:01 crc kubenswrapper[4784]: I0123 07:31:01.317883 4784 generic.go:334] "Generic (PLEG): container finished" podID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerID="cdf915b5735e4c75a06e5773ec89009cbc85b3755e5546a5eaf84cd3cc56434b" exitCode=0 Jan 23 07:31:01 crc kubenswrapper[4784]: I0123 07:31:01.317979 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnkds" event={"ID":"1639820c-834e-48f3-923e-9fba2f1ca0d8","Type":"ContainerDied","Data":"cdf915b5735e4c75a06e5773ec89009cbc85b3755e5546a5eaf84cd3cc56434b"} Jan 23 07:31:07 crc kubenswrapper[4784]: I0123 07:31:07.239777 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:31:07 crc kubenswrapper[4784]: I0123 07:31:07.240484 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:31:07 crc kubenswrapper[4784]: I0123 07:31:07.239824 4784 patch_prober.go:28] interesting pod/router-default-5444994796-gvzxz container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:31:07 crc kubenswrapper[4784]: I0123 07:31:07.240631 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-gvzxz" podUID="0d1c5a4a-d067-4ab8-b623-82a192c3bb07" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:31:10 crc kubenswrapper[4784]: I0123 07:31:10.431625 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnkds" event={"ID":"1639820c-834e-48f3-923e-9fba2f1ca0d8","Type":"ContainerStarted","Data":"f2d3e98dba0e57e46a74d625e96a3ecd04dea1be6d4a833edb1262aeec561cb3"} Jan 23 07:31:10 crc kubenswrapper[4784]: I0123 07:31:10.456420 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wnkds" podStartSLOduration=3.228106372 podStartE2EDuration="19.456393503s" podCreationTimestamp="2026-01-23 07:30:51 +0000 UTC" firstStartedPulling="2026-01-23 07:30:53.22003371 +0000 UTC m=+4256.452541684" lastFinishedPulling="2026-01-23 07:31:09.448320841 +0000 UTC m=+4272.680828815" observedRunningTime="2026-01-23 07:31:10.45266929 +0000 UTC m=+4273.685177304" watchObservedRunningTime="2026-01-23 07:31:10.456393503 +0000 UTC m=+4273.688901477" Jan 23 07:31:11 crc kubenswrapper[4784]: I0123 07:31:11.764624 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:31:11 crc kubenswrapper[4784]: I0123 
07:31:11.765068 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:31:12 crc kubenswrapper[4784]: I0123 07:31:12.840371 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wnkds" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="registry-server" probeResult="failure" output=< Jan 23 07:31:12 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 07:31:12 crc kubenswrapper[4784]: > Jan 23 07:31:21 crc kubenswrapper[4784]: I0123 07:31:21.831818 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:31:21 crc kubenswrapper[4784]: I0123 07:31:21.911036 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:31:22 crc kubenswrapper[4784]: I0123 07:31:22.632816 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wnkds"] Jan 23 07:31:23 crc kubenswrapper[4784]: I0123 07:31:23.578655 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wnkds" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="registry-server" containerID="cri-o://f2d3e98dba0e57e46a74d625e96a3ecd04dea1be6d4a833edb1262aeec561cb3" gracePeriod=2 Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.596254 4784 generic.go:334] "Generic (PLEG): container finished" podID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerID="f2d3e98dba0e57e46a74d625e96a3ecd04dea1be6d4a833edb1262aeec561cb3" exitCode=0 Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.596312 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnkds" 
event={"ID":"1639820c-834e-48f3-923e-9fba2f1ca0d8","Type":"ContainerDied","Data":"f2d3e98dba0e57e46a74d625e96a3ecd04dea1be6d4a833edb1262aeec561cb3"} Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.861240 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.962330 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szx4r\" (UniqueName: \"kubernetes.io/projected/1639820c-834e-48f3-923e-9fba2f1ca0d8-kube-api-access-szx4r\") pod \"1639820c-834e-48f3-923e-9fba2f1ca0d8\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.962474 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-catalog-content\") pod \"1639820c-834e-48f3-923e-9fba2f1ca0d8\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.962565 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-utilities\") pod \"1639820c-834e-48f3-923e-9fba2f1ca0d8\" (UID: \"1639820c-834e-48f3-923e-9fba2f1ca0d8\") " Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.963505 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-utilities" (OuterVolumeSpecName: "utilities") pod "1639820c-834e-48f3-923e-9fba2f1ca0d8" (UID: "1639820c-834e-48f3-923e-9fba2f1ca0d8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.963944 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:31:24 crc kubenswrapper[4784]: I0123 07:31:24.969046 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1639820c-834e-48f3-923e-9fba2f1ca0d8-kube-api-access-szx4r" (OuterVolumeSpecName: "kube-api-access-szx4r") pod "1639820c-834e-48f3-923e-9fba2f1ca0d8" (UID: "1639820c-834e-48f3-923e-9fba2f1ca0d8"). InnerVolumeSpecName "kube-api-access-szx4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.066495 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szx4r\" (UniqueName: \"kubernetes.io/projected/1639820c-834e-48f3-923e-9fba2f1ca0d8-kube-api-access-szx4r\") on node \"crc\" DevicePath \"\"" Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.098385 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1639820c-834e-48f3-923e-9fba2f1ca0d8" (UID: "1639820c-834e-48f3-923e-9fba2f1ca0d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.169867 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1639820c-834e-48f3-923e-9fba2f1ca0d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.611107 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnkds" event={"ID":"1639820c-834e-48f3-923e-9fba2f1ca0d8","Type":"ContainerDied","Data":"80e96f9188961e762921cf0dd43cfda7a34a3951e4f8c929608055d7bbb0d099"} Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.611164 4784 scope.go:117] "RemoveContainer" containerID="f2d3e98dba0e57e46a74d625e96a3ecd04dea1be6d4a833edb1262aeec561cb3" Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.611191 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnkds" Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.650158 4784 scope.go:117] "RemoveContainer" containerID="cdf915b5735e4c75a06e5773ec89009cbc85b3755e5546a5eaf84cd3cc56434b" Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.657212 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wnkds"] Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.680641 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wnkds"] Jan 23 07:31:25 crc kubenswrapper[4784]: I0123 07:31:25.693861 4784 scope.go:117] "RemoveContainer" containerID="99ffec2551a72896301c49a8bf929ffee881aad5d908fe12f0eea49fdce3472c" Jan 23 07:31:27 crc kubenswrapper[4784]: I0123 07:31:27.266441 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" path="/var/lib/kubelet/pods/1639820c-834e-48f3-923e-9fba2f1ca0d8/volumes" Jan 23 07:32:53 crc 
kubenswrapper[4784]: I0123 07:32:53.603514 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:32:53 crc kubenswrapper[4784]: I0123 07:32:53.604031 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:33:23 crc kubenswrapper[4784]: I0123 07:33:23.603011 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:33:23 crc kubenswrapper[4784]: I0123 07:33:23.603841 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:33:53 crc kubenswrapper[4784]: I0123 07:33:53.603042 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:33:53 crc kubenswrapper[4784]: I0123 07:33:53.603823 4784 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:33:53 crc kubenswrapper[4784]: I0123 07:33:53.603909 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 07:33:53 crc kubenswrapper[4784]: I0123 07:33:53.604935 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:33:53 crc kubenswrapper[4784]: I0123 07:33:53.605012 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" gracePeriod=600 Jan 23 07:33:53 crc kubenswrapper[4784]: E0123 07:33:53.747417 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:33:54 crc kubenswrapper[4784]: I0123 07:33:54.154716 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" 
containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" exitCode=0 Jan 23 07:33:54 crc kubenswrapper[4784]: I0123 07:33:54.154800 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1"} Jan 23 07:33:54 crc kubenswrapper[4784]: I0123 07:33:54.154893 4784 scope.go:117] "RemoveContainer" containerID="994d3bcdb549373bc6598290555b55b409dfcc798f022bdc875fe89efe149218" Jan 23 07:33:54 crc kubenswrapper[4784]: I0123 07:33:54.156021 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:33:54 crc kubenswrapper[4784]: E0123 07:33:54.157313 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:34:05 crc kubenswrapper[4784]: I0123 07:34:05.254489 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:34:05 crc kubenswrapper[4784]: E0123 07:34:05.255506 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:34:17 crc kubenswrapper[4784]: I0123 
07:34:17.263416 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:34:17 crc kubenswrapper[4784]: E0123 07:34:17.264185 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:34:32 crc kubenswrapper[4784]: I0123 07:34:32.254351 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:34:32 crc kubenswrapper[4784]: E0123 07:34:32.255236 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:34:43 crc kubenswrapper[4784]: I0123 07:34:43.254361 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:34:43 crc kubenswrapper[4784]: E0123 07:34:43.255480 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:34:58 crc 
kubenswrapper[4784]: I0123 07:34:58.253562 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:34:58 crc kubenswrapper[4784]: E0123 07:34:58.255647 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:35:09 crc kubenswrapper[4784]: I0123 07:35:09.255175 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:35:09 crc kubenswrapper[4784]: E0123 07:35:09.258864 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:35:24 crc kubenswrapper[4784]: I0123 07:35:24.254035 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:35:24 crc kubenswrapper[4784]: E0123 07:35:24.254976 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 
23 07:35:38 crc kubenswrapper[4784]: I0123 07:35:38.254463 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:35:38 crc kubenswrapper[4784]: E0123 07:35:38.256922 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:35:52 crc kubenswrapper[4784]: I0123 07:35:52.254344 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:35:52 crc kubenswrapper[4784]: E0123 07:35:52.255024 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:36:04 crc kubenswrapper[4784]: I0123 07:36:04.799151 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="87ac961b-d41b-43ef-b55e-07b0cf093e56" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.185:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:05 crc kubenswrapper[4784]: I0123 07:36:05.254141 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:36:05 crc kubenswrapper[4784]: E0123 07:36:05.254648 4784 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:36:06 crc kubenswrapper[4784]: I0123 07:36:06.775420 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="263b6093-4133-4159-b83a-32199b46fa5d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 07:36:09 crc kubenswrapper[4784]: I0123 07:36:09.842033 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="87ac961b-d41b-43ef-b55e-07b0cf093e56" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.185:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:12 crc kubenswrapper[4784]: I0123 07:36:12.528976 4784 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-srwlm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:12 crc kubenswrapper[4784]: I0123 07:36:12.529431 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-srwlm" podUID="4180fe07-d016-4462-8f55-9da994cc6827" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.64:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:12 crc kubenswrapper[4784]: I0123 07:36:12.774598 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" 
podUID="263b6093-4133-4159-b83a-32199b46fa5d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 07:36:14 crc kubenswrapper[4784]: I0123 07:36:14.883974 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="87ac961b-d41b-43ef-b55e-07b0cf093e56" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.185:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:14 crc kubenswrapper[4784]: I0123 07:36:14.884096 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 07:36:14 crc kubenswrapper[4784]: I0123 07:36:14.885069 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"dc1648fe152d72b6b402d1cde063c6ff10a8ad784e1dc8b191d3097c38f8fb57"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 23 07:36:14 crc kubenswrapper[4784]: I0123 07:36:14.885534 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="87ac961b-d41b-43ef-b55e-07b0cf093e56" containerName="cinder-scheduler" containerID="cri-o://dc1648fe152d72b6b402d1cde063c6ff10a8ad784e1dc8b191d3097c38f8fb57" gracePeriod=30 Jan 23 07:36:17 crc kubenswrapper[4784]: I0123 07:36:17.267935 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:36:17 crc kubenswrapper[4784]: E0123 07:36:17.269981 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:36:17 crc kubenswrapper[4784]: I0123 07:36:17.775647 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="263b6093-4133-4159-b83a-32199b46fa5d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 07:36:17 crc kubenswrapper[4784]: I0123 07:36:17.776574 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 23 07:36:17 crc kubenswrapper[4784]: I0123 07:36:17.779024 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"b32fc5b188b9c182a1519e7a319ffd0e2844c19457eea65e0751cd078b0e6c10"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 23 07:36:17 crc kubenswrapper[4784]: I0123 07:36:17.779326 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="263b6093-4133-4159-b83a-32199b46fa5d" containerName="ceilometer-central-agent" containerID="cri-o://b32fc5b188b9c182a1519e7a319ffd0e2844c19457eea65e0751cd078b0e6c10" gracePeriod=30 Jan 23 07:36:21 crc kubenswrapper[4784]: I0123 07:36:21.645077 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:21 crc kubenswrapper[4784]: I0123 07:36:21.645893 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:21 crc kubenswrapper[4784]: I0123 07:36:21.645318 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:21 crc kubenswrapper[4784]: I0123 07:36:21.646236 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:29 crc kubenswrapper[4784]: I0123 07:36:29.253641 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:36:29 crc kubenswrapper[4784]: E0123 07:36:29.254657 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:36:29 crc kubenswrapper[4784]: I0123 07:36:29.651372 4784 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-qc9lz container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get 
\"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:29 crc kubenswrapper[4784]: I0123 07:36:29.651965 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" podUID="c87ee378-b6b8-4c35-a49a-42a09402ba7d" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:30 crc kubenswrapper[4784]: I0123 07:36:30.740680 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.213:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:31 crc kubenswrapper[4784]: I0123 07:36:31.645398 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:31 crc kubenswrapper[4784]: I0123 07:36:31.645474 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:31 crc kubenswrapper[4784]: I0123 07:36:31.645552 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" 
podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:31 crc kubenswrapper[4784]: I0123 07:36:31.645476 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:31 crc kubenswrapper[4784]: I0123 07:36:31.779474 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="263b6093-4133-4159-b83a-32199b46fa5d" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 23 07:36:40 crc kubenswrapper[4784]: I0123 07:36:40.740626 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.213:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:41 crc kubenswrapper[4784]: I0123 07:36:41.645450 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:41 crc kubenswrapper[4784]: I0123 07:36:41.645524 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" 
podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:41 crc kubenswrapper[4784]: I0123 07:36:41.645586 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:41 crc kubenswrapper[4784]: I0123 07:36:41.645683 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:41 crc kubenswrapper[4784]: I0123 07:36:41.645783 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 07:36:41 crc kubenswrapper[4784]: I0123 07:36:41.646911 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"8625052bb0955bc87f45ad4e831f7c7578bb8a47477a61db8467c2de14abb02a"} pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" containerMessage="Container controller-manager failed liveness probe, will be restarted" Jan 23 07:36:41 crc kubenswrapper[4784]: I0123 07:36:41.646984 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" 
podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" containerID="cri-o://8625052bb0955bc87f45ad4e831f7c7578bb8a47477a61db8467c2de14abb02a" gracePeriod=30 Jan 23 07:36:42 crc kubenswrapper[4784]: E0123 07:36:42.011564 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:36:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:36:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:36:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:36:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": context deadline exceeded" Jan 23 07:36:43 crc kubenswrapper[4784]: I0123 07:36:43.254504 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:36:43 crc kubenswrapper[4784]: E0123 07:36:43.254794 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:36:44 crc kubenswrapper[4784]: I0123 07:36:44.286668 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:44 crc kubenswrapper[4784]: I0123 07:36:44.287103 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:45 crc kubenswrapper[4784]: I0123 07:36:45.349901 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:36:45 crc kubenswrapper[4784]: I0123 07:36:45.351347 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:36:45 crc kubenswrapper[4784]: I0123 07:36:45.678464 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:36:45 crc kubenswrapper[4784]: I0123 07:36:45.678914 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:36:46 crc kubenswrapper[4784]: I0123 07:36:46.152841 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:36:46 crc kubenswrapper[4784]: I0123 07:36:46.152841 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:36:49 crc kubenswrapper[4784]: I0123 07:36:49.107089 4784 patch_prober.go:28] interesting pod/console-58d484d7c8-xhg4d container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:49 crc kubenswrapper[4784]: I0123 07:36:49.107455 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-58d484d7c8-xhg4d" podUID="7979f0da-f16f-4e2a-8c1c-a667607ddcf2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.50:8443/health\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:49 crc kubenswrapper[4784]: I0123 07:36:49.287658 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:49 crc kubenswrapper[4784]: I0123 07:36:49.287778 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:49 crc kubenswrapper[4784]: E0123 07:36:49.985650 4784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:50 crc kubenswrapper[4784]: I0123 07:36:50.740617 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.213:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:51 crc kubenswrapper[4784]: I0123 07:36:51.595330 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" podUID="500659da-123f-4500-9c50-2b7b3b7656df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 
10.217.0.89:8081: connect: connection refused" Jan 23 07:36:51 crc kubenswrapper[4784]: I0123 07:36:51.644871 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:51 crc kubenswrapper[4784]: I0123 07:36:51.644936 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:52 crc kubenswrapper[4784]: E0123 07:36:52.012585 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:52 crc kubenswrapper[4784]: I0123 07:36:52.061897 4784 patch_prober.go:28] interesting pod/oauth-openshift-5cf8f9f8d-5d2r6 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:52 crc kubenswrapper[4784]: I0123 07:36:52.061990 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-5cf8f9f8d-5d2r6" podUID="858edacc-ac93-4885-82b6-eea41f7eabdc" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:52 crc kubenswrapper[4784]: I0123 07:36:52.741914 4784 generic.go:334] "Generic (PLEG): container finished" podID="87ac961b-d41b-43ef-b55e-07b0cf093e56" containerID="dc1648fe152d72b6b402d1cde063c6ff10a8ad784e1dc8b191d3097c38f8fb57" exitCode=-1 Jan 23 07:36:52 crc kubenswrapper[4784]: I0123 07:36:52.741959 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"87ac961b-d41b-43ef-b55e-07b0cf093e56","Type":"ContainerDied","Data":"dc1648fe152d72b6b402d1cde063c6ff10a8ad784e1dc8b191d3097c38f8fb57"} Jan 23 07:36:54 crc kubenswrapper[4784]: I0123 07:36:54.289217 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:54 crc kubenswrapper[4784]: I0123 07:36:54.289575 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:55 crc kubenswrapper[4784]: I0123 07:36:55.351570 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:36:55 crc kubenswrapper[4784]: I0123 07:36:55.560015 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" 
podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 23 07:36:55 crc kubenswrapper[4784]: I0123 07:36:55.678644 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:36:55 crc kubenswrapper[4784]: I0123 07:36:55.759945 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 23 07:36:55 crc kubenswrapper[4784]: I0123 07:36:55.829866 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": dial tcp 10.217.0.94:8081: connect: connection refused" Jan 23 07:36:55 crc kubenswrapper[4784]: I0123 07:36:55.946546 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podUID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 23 07:36:56 crc kubenswrapper[4784]: I0123 07:36:56.056394 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" 
podUID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 23 07:36:56 crc kubenswrapper[4784]: I0123 07:36:56.153337 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:36:58 crc kubenswrapper[4784]: I0123 07:36:58.107215 4784 patch_prober.go:28] interesting pod/console-58d484d7c8-xhg4d container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.50:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:58 crc kubenswrapper[4784]: I0123 07:36:58.107875 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-58d484d7c8-xhg4d" podUID="7979f0da-f16f-4e2a-8c1c-a667607ddcf2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.50:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:58 crc kubenswrapper[4784]: I0123 07:36:58.107988 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 07:36:58 crc kubenswrapper[4784]: I0123 07:36:58.108845 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"643fce47d779e6847ab0e9078cd9cd3469ebdc0475c20eda75ba9e533e566b32"} pod="openshift-console/console-58d484d7c8-xhg4d" containerMessage="Container console failed liveness probe, will be restarted" Jan 23 07:36:58 crc kubenswrapper[4784]: 
I0123 07:36:58.183447 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/healthz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:36:58 crc kubenswrapper[4784]: I0123 07:36:58.183800 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:36:58 crc kubenswrapper[4784]: I0123 07:36:58.253999 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:36:58 crc kubenswrapper[4784]: E0123 07:36:58.254401 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:36:59 crc kubenswrapper[4784]: I0123 07:36:59.107314 4784 patch_prober.go:28] interesting pod/console-58d484d7c8-xhg4d container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:59 crc kubenswrapper[4784]: I0123 07:36:59.107403 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-58d484d7c8-xhg4d" 
podUID="7979f0da-f16f-4e2a-8c1c-a667607ddcf2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.50:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:59 crc kubenswrapper[4784]: I0123 07:36:59.287937 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:59 crc kubenswrapper[4784]: I0123 07:36:59.288040 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:59 crc kubenswrapper[4784]: I0123 07:36:59.652947 4784 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-qc9lz container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:59 crc kubenswrapper[4784]: I0123 07:36:59.653068 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" podUID="c87ee378-b6b8-4c35-a49a-42a09402ba7d" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:59 crc kubenswrapper[4784]: I0123 07:36:59.653072 4784 patch_prober.go:28] 
interesting pod/image-registry-66df7c8f76-qc9lz container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:36:59 crc kubenswrapper[4784]: I0123 07:36:59.653143 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-qc9lz" podUID="c87ee378-b6b8-4c35-a49a-42a09402ba7d" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.67:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:36:59 crc kubenswrapper[4784]: E0123 07:36:59.986648 4784 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:00 crc kubenswrapper[4784]: I0123 07:37:00.740892 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.213:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:00 crc kubenswrapper[4784]: I0123 07:37:00.741008 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 23 07:37:00 crc kubenswrapper[4784]: I0123 07:37:00.742214 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"8f8ea897b2fd3969d33cd61727046466040e10f536aa9ed3bc0d81a479272c3f"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be 
restarted" Jan 23 07:37:00 crc kubenswrapper[4784]: I0123 07:37:00.742292 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" containerID="cri-o://8f8ea897b2fd3969d33cd61727046466040e10f536aa9ed3bc0d81a479272c3f" gracePeriod=30 Jan 23 07:37:00 crc kubenswrapper[4784]: I0123 07:37:00.918725 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" podUID="758913f1-9ef1-4fe9-9d5f-2cb794fcddef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 23 07:37:00 crc kubenswrapper[4784]: I0123 07:37:00.919047 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" podUID="758913f1-9ef1-4fe9-9d5f-2cb794fcddef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 23 07:37:01 crc kubenswrapper[4784]: I0123 07:37:01.595554 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" podUID="500659da-123f-4500-9c50-2b7b3b7656df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 23 07:37:01 crc kubenswrapper[4784]: I0123 07:37:01.595644 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" podUID="500659da-123f-4500-9c50-2b7b3b7656df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 23 07:37:01 crc 
kubenswrapper[4784]: I0123 07:37:01.645431 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:37:01 crc kubenswrapper[4784]: I0123 07:37:01.645531 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:01 crc kubenswrapper[4784]: I0123 07:37:01.774330 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="263b6093-4133-4159-b83a-32199b46fa5d" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 23 07:37:02 crc kubenswrapper[4784]: E0123 07:37:02.013723 4784 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:03 crc kubenswrapper[4784]: I0123 07:37:03.726999 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" podUID="47ec951f-c0f2-40f8-9361-6ca608819c25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.52:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:04 crc kubenswrapper[4784]: I0123 07:37:04.290387 4784 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:37:04 crc kubenswrapper[4784]: I0123 07:37:04.290576 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:04 crc kubenswrapper[4784]: I0123 07:37:04.290885 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:37:04 crc kubenswrapper[4784]: I0123 07:37:04.883060 4784 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]leaderElection failed: reason withheld Jan 23 07:37:04 crc kubenswrapper[4784]: [+]serviceaccount-token-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]deployment-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]daemonset-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]horizontal-pod-autoscaler-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]disruption-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]persistentvolume-binder-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]endpoints-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]replicationcontroller-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]garbage-collector-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]clusterrole-aggregation-controller ok Jan 23 07:37:04 crc 
kubenswrapper[4784]: [+]ttl-after-finished-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]root-ca-certificate-publisher-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]taint-eviction-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]statefulset-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]certificatesigningrequest-signing-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]ephemeral-volume-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]persistentvolume-expander-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]persistentvolumeclaim-protection-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]legacy-serviceaccount-token-cleaner-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]cronjob-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]endpointslice-mirroring-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]pod-garbage-collector-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]namespace-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]certificatesigningrequest-cleaner-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]persistentvolume-protection-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]endpointslice-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]serviceaccount-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]certificatesigningrequest-approving-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]service-ca-certificate-publisher-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]validatingadmissionpolicy-status-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]resourcequota-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]job-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]persistentvolume-attach-detach-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]replicaset-controller ok Jan 23 07:37:04 crc kubenswrapper[4784]: [+]node-lifecycle-controller ok Jan 23 07:37:04 
crc kubenswrapper[4784]: healthz check failed Jan 23 07:37:04 crc kubenswrapper[4784]: I0123 07:37:04.883487 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.076215 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.076341 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.133677 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.145988 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 07:37:05 crc 
kubenswrapper[4784]: I0123 07:37:05.157017 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.194406 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.194487 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.218863 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.218928 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.236605 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" podUID="4fa12cd4-f2bc-4863-8b67-e246a0becee3" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.236609 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" podUID="4fa12cd4-f2bc-4863-8b67-e246a0becee3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.259941 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" podUID="be839066-996a-463b-b96c-a340d4e55ffd" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.292260 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.292592 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.351058 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.352277 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.352407 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.353309 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.420823 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.421089 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.487019 4784 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.487036 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.560859 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.560873 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.604027 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" podUID="be79eaa0-8040-4009-9f16-fcb56bffbff7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.604305 4784 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" podUID="be79eaa0-8040-4009-9f16-fcb56bffbff7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.658864 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.658925 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.679182 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.679185 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.679784 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.680452 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.718312 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.725910 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.740441 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.213:8081/readyz\": dial tcp 10.217.0.213:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.759082 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: 
connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.759341 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.829882 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": dial tcp 10.217.0.94:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.830044 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": dial tcp 10.217.0.94:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.946480 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podUID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/healthz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 23 07:37:05 crc kubenswrapper[4784]: I0123 07:37:05.946516 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podUID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: 
connection refused" Jan 23 07:37:06 crc kubenswrapper[4784]: I0123 07:37:06.056028 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" podUID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 23 07:37:06 crc kubenswrapper[4784]: I0123 07:37:06.056064 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" podUID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 23 07:37:06 crc kubenswrapper[4784]: I0123 07:37:06.153449 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:37:06 crc kubenswrapper[4784]: I0123 07:37:06.153497 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:37:06 crc kubenswrapper[4784]: I0123 07:37:06.153983 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" Jan 23 07:37:06 crc kubenswrapper[4784]: I0123 07:37:06.154638 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" 
podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:37:08 crc kubenswrapper[4784]: I0123 07:37:08.183204 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:37:08 crc kubenswrapper[4784]: I0123 07:37:08.612600 4784 generic.go:334] "Generic (PLEG): container finished" podID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerID="8f8ea897b2fd3969d33cd61727046466040e10f536aa9ed3bc0d81a479272c3f" exitCode=-1 Jan 23 07:37:08 crc kubenswrapper[4784]: I0123 07:37:08.612681 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4e84d3df-4011-472a-9b95-9ed21dea27d5","Type":"ContainerDied","Data":"8f8ea897b2fd3969d33cd61727046466040e10f536aa9ed3bc0d81a479272c3f"} Jan 23 07:37:10 crc kubenswrapper[4784]: I0123 07:37:10.645376 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 07:37:10 crc kubenswrapper[4784]: I0123 07:37:10.646088 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 07:37:10 crc kubenswrapper[4784]: I0123 07:37:10.676464 4784 
patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 07:37:10 crc kubenswrapper[4784]: I0123 07:37:10.676582 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 07:37:10 crc kubenswrapper[4784]: I0123 07:37:10.918914 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" podUID="758913f1-9ef1-4fe9-9d5f-2cb794fcddef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 23 07:37:11 crc kubenswrapper[4784]: I0123 07:37:11.596169 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" podUID="500659da-123f-4500-9c50-2b7b3b7656df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 23 07:37:11 crc kubenswrapper[4784]: I0123 07:37:11.596711 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 07:37:11 crc kubenswrapper[4784]: I0123 07:37:11.598461 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" podUID="500659da-123f-4500-9c50-2b7b3b7656df" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 23 07:37:12 crc kubenswrapper[4784]: I0123 07:37:12.259701 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:37:12 crc kubenswrapper[4784]: E0123 07:37:12.285773 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:37:12 crc kubenswrapper[4784]: I0123 07:37:12.482842 4784 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 07:37:12 crc kubenswrapper[4784]: I0123 07:37:12.482945 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 07:37:12 crc kubenswrapper[4784]: I0123 07:37:12.685404 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" podUID="47ec951f-c0f2-40f8-9361-6ca608819c25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.52:8080/readyz\": dial tcp 10.217.0.52:8080: connect: connection refused" Jan 23 
07:37:13 crc kubenswrapper[4784]: I0123 07:37:13.787570 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="fd7b402f-9e10-4056-9911-be0cbb5fab92" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:13 crc kubenswrapper[4784]: I0123 07:37:13.787661 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="fd7b402f-9e10-4056-9911-be0cbb5fab92" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:13 crc kubenswrapper[4784]: I0123 07:37:13.787700 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="fd7b402f-9e10-4056-9911-be0cbb5fab92" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:13 crc kubenswrapper[4784]: I0123 07:37:13.788426 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="fd7b402f-9e10-4056-9911-be0cbb5fab92" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.076580 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.133816 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.194870 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.219159 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.237548 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" podUID="4fa12cd4-f2bc-4863-8b67-e246a0becee3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.258576 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" podUID="be839066-996a-463b-b96c-a340d4e55ffd" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.258636 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" podUID="be839066-996a-463b-b96c-a340d4e55ffd" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.288661 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.351630 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.419673 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.486626 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.561381 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.561497 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.562388 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.604072 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" podUID="be79eaa0-8040-4009-9f16-fcb56bffbff7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.658336 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.680718 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.717646 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.740533 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.213:8081/readyz\": dial tcp 10.217.0.213:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.759832 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.759953 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.830294 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": dial tcp 10.217.0.94:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.830822 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.832800 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": dial tcp 10.217.0.94:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.946614 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podUID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.947089 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" Jan 23 07:37:15 crc kubenswrapper[4784]: I0123 07:37:15.948124 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podUID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 23 07:37:16 crc kubenswrapper[4784]: I0123 07:37:16.056878 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" podUID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 23 07:37:16 crc kubenswrapper[4784]: I0123 07:37:16.057012 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" Jan 23 07:37:16 crc kubenswrapper[4784]: I0123 07:37:16.058043 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" podUID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 23 07:37:16 crc kubenswrapper[4784]: I0123 07:37:16.152964 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:37:18 crc kubenswrapper[4784]: I0123 07:37:18.183546 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/healthz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:37:18 crc kubenswrapper[4784]: I0123 07:37:18.183719 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:37:18 crc kubenswrapper[4784]: I0123 07:37:18.184009 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.645160 4784 patch_prober.go:28] interesting 
pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.646569 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.676847 4784 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.676928 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.676986 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.678166 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"2389be05b151eb26e0644f4d99c1c86fe04639bc4c03e4b8d6b51c7653c9c041"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
containerMessage="Container kube-controller-manager failed liveness probe, will be restarted" Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.678363 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://2389be05b151eb26e0644f4d99c1c86fe04639bc4c03e4b8d6b51c7653c9c041" gracePeriod=30 Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.918882 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" podUID="758913f1-9ef1-4fe9-9d5f-2cb794fcddef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.919035 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.919058 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" podUID="758913f1-9ef1-4fe9-9d5f-2cb794fcddef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 23 07:37:20 crc kubenswrapper[4784]: I0123 07:37:20.919708 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" podUID="758913f1-9ef1-4fe9-9d5f-2cb794fcddef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 23 07:37:21 crc kubenswrapper[4784]: I0123 07:37:21.595459 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" podUID="500659da-123f-4500-9c50-2b7b3b7656df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 23 07:37:21 crc kubenswrapper[4784]: I0123 07:37:21.595478 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" podUID="500659da-123f-4500-9c50-2b7b3b7656df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 23 07:37:22 crc kubenswrapper[4784]: I0123 07:37:22.483400 4784 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 07:37:22 crc kubenswrapper[4784]: I0123 07:37:22.483463 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 07:37:22 crc kubenswrapper[4784]: I0123 07:37:22.685792 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" podUID="47ec951f-c0f2-40f8-9361-6ca608819c25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.52:8080/readyz\": dial tcp 10.217.0.52:8080: connect: connection refused" Jan 23 07:37:22 crc kubenswrapper[4784]: I0123 07:37:22.685973 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 07:37:22 crc kubenswrapper[4784]: I0123 07:37:22.686788 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" podUID="47ec951f-c0f2-40f8-9361-6ca608819c25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.52:8080/readyz\": dial tcp 10.217.0.52:8080: connect: connection refused" Jan 23 07:37:23 crc kubenswrapper[4784]: I0123 07:37:23.186367 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-58d484d7c8-xhg4d" podUID="7979f0da-f16f-4e2a-8c1c-a667607ddcf2" containerName="console" containerID="cri-o://643fce47d779e6847ab0e9078cd9cd3469ebdc0475c20eda75ba9e533e566b32" gracePeriod=15 Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.076638 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.077286 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.076668 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.078393 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.134738 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.134889 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.135782 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.145968 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.195172 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.195194 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.195278 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.195947 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.219115 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.219235 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.219129 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.219893 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.238282 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" podUID="4fa12cd4-f2bc-4863-8b67-e246a0becee3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.238644 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" podUID="4fa12cd4-f2bc-4863-8b67-e246a0becee3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.238790 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.239648 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" podUID="4fa12cd4-f2bc-4863-8b67-e246a0becee3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.258930 4784 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" podUID="be839066-996a-463b-b96c-a340d4e55ffd" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.266910 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.267514 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" podUID="be839066-996a-463b-b96c-a340d4e55ffd" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.289419 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.289431 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.289648 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.290073 4784 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.351190 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.351260 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.351698 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.419496 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.419598 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.419824 4784 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.420184 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.486935 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.487004 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.487071 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.487605 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.561082 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.561082 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.604655 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" podUID="be79eaa0-8040-4009-9f16-fcb56bffbff7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.604672 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" podUID="be79eaa0-8040-4009-9f16-fcb56bffbff7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.604885 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.605806 4784 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" podUID="be79eaa0-8040-4009-9f16-fcb56bffbff7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.658581 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.658700 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.658581 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.659287 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.681551 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.681646 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.681681 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.720674 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.720783 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.721829 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.725835 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.90:8081/healthz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.740602 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.213:8081/readyz\": dial tcp 10.217.0.213:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.740699 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.759040 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.759083 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.829406 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": dial tcp 10.217.0.94:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.829887 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": dial tcp 10.217.0.94:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.945610 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podUID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 23 07:37:25 crc kubenswrapper[4784]: I0123 07:37:25.945684 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podUID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/healthz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 23 07:37:26 crc kubenswrapper[4784]: I0123 07:37:26.056109 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" podUID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 23 07:37:26 crc kubenswrapper[4784]: I0123 07:37:26.056109 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" podUID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 23 07:37:26 crc kubenswrapper[4784]: I0123 07:37:26.154058 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:37:26 crc kubenswrapper[4784]: I0123 07:37:26.154349 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:37:26 crc kubenswrapper[4784]: I0123 07:37:26.154449 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" Jan 23 07:37:26 crc kubenswrapper[4784]: I0123 07:37:26.701331 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="263b6093-4133-4159-b83a-32199b46fa5d" containerName="ceilometer-notification-agent" probeResult="failure" output=< Jan 23 07:37:26 crc kubenswrapper[4784]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 23 07:37:26 crc kubenswrapper[4784]: > Jan 23 07:37:26 crc kubenswrapper[4784]: I0123 07:37:26.701664 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 23 07:37:28 crc kubenswrapper[4784]: I0123 07:37:28.107240 4784 patch_prober.go:28] interesting pod/console-58d484d7c8-xhg4d container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/health\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Jan 23 07:37:28 crc kubenswrapper[4784]: I0123 07:37:28.107654 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-58d484d7c8-xhg4d" 
podUID="7979f0da-f16f-4e2a-8c1c-a667607ddcf2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.50:8443/health\": dial tcp 10.217.0.50:8443: connect: connection refused" Jan 23 07:37:28 crc kubenswrapper[4784]: I0123 07:37:28.182527 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:37:30 crc kubenswrapper[4784]: I0123 07:37:30.644876 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 07:37:30 crc kubenswrapper[4784]: I0123 07:37:30.645344 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 07:37:30 crc kubenswrapper[4784]: I0123 07:37:30.918313 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" podUID="758913f1-9ef1-4fe9-9d5f-2cb794fcddef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 23 07:37:31 crc kubenswrapper[4784]: I0123 07:37:31.595961 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" 
podUID="500659da-123f-4500-9c50-2b7b3b7656df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": dial tcp 10.217.0.89:8081: connect: connection refused" Jan 23 07:37:32 crc kubenswrapper[4784]: I0123 07:37:32.483834 4784 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 07:37:32 crc kubenswrapper[4784]: I0123 07:37:32.484264 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 07:37:32 crc kubenswrapper[4784]: I0123 07:37:32.484345 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:37:32 crc kubenswrapper[4784]: I0123 07:37:32.686386 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" podUID="47ec951f-c0f2-40f8-9361-6ca608819c25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.52:8080/readyz\": dial tcp 10.217.0.52:8080: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.076062 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" podUID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: 
I0123 07:37:35.134287 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" podUID="55f3492a-a5c0-460b-a93b-eb680b426a7c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.194361 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" podUID="f54aca80-78ad-4bda-905c-0a519a4f33ed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.220634 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" podUID="417f228a-38b7-448a-980d-f64d6e113646" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.237183 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" podUID="4fa12cd4-f2bc-4863-8b67-e246a0becee3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.259841 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" podUID="be839066-996a-463b-b96c-a340d4e55ffd" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.259991 4784 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" podUID="be839066-996a-463b-b96c-a340d4e55ffd" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.289144 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" podUID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.351673 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.421092 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" podUID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.486984 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" podUID="138e85ae-26a7-45f3-ac25-61ece9cf8573" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.560983 4784 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" podUID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.605721 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" podUID="be79eaa0-8040-4009-9f16-fcb56bffbff7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.658047 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" podUID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.679195 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.719029 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" podUID="e8de3214-d1e9-4800-9ace-51a85b326df8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": dial tcp 10.217.0.90:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.741509 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/kube-state-metrics-0" podUID="4e84d3df-4011-472a-9b95-9ed21dea27d5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.213:8081/readyz\": dial tcp 10.217.0.213:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.760019 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.831003 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" podUID="3a006f0b-6298-4509-9533-178b38906875" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": dial tcp 10.217.0.94:8081: connect: connection refused" Jan 23 07:37:35 crc kubenswrapper[4784]: I0123 07:37:35.953845 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" podUID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": dial tcp 10.217.0.96:8081: connect: connection refused" Jan 23 07:37:36 crc kubenswrapper[4784]: I0123 07:37:36.070855 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" podUID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": dial tcp 10.217.0.93:8081: connect: connection refused" Jan 23 07:37:36 crc kubenswrapper[4784]: I0123 07:37:36.154299 4784 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.346533 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-d5d5977cb-wwrnl_4cf2e2b0-09d4-411b-9a83-1b3b409368be/controller-manager/0.log" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.347027 4784 generic.go:334] "Generic (PLEG): container finished" podID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerID="8625052bb0955bc87f45ad4e831f7c7578bb8a47477a61db8467c2de14abb02a" exitCode=-1 Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.348998 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"4e60b62e73612f0b4b0f3e797e97f96da637371640a2e888d58c03fa29334e40"} pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" containerMessage="Container manager failed liveness probe, will be restarted" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.349042 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" containerID="cri-o://4e60b62e73612f0b4b0f3e797e97f96da637371640a2e888d58c03fa29334e40" gracePeriod=10 Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.349101 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" event={"ID":"4cf2e2b0-09d4-411b-9a83-1b3b409368be","Type":"ContainerDied","Data":"8625052bb0955bc87f45ad4e831f7c7578bb8a47477a61db8467c2de14abb02a"} Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.349478 4784 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"231a51d922328c63d8bc60339ff99a10fea2fe1633661ddd1c9a5790fba2865d"} pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" containerMessage="Container manager failed liveness probe, will be restarted" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.349531 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" containerID="cri-o://231a51d922328c63d8bc60339ff99a10fea2fe1633661ddd1c9a5790fba2865d" gracePeriod=10 Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.349986 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" podUID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": dial tcp 10.217.0.91:8081: connect: connection refused" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.350049 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" podUID="89a376c8-b238-445d-99da-b85f3c421125" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": dial tcp 10.217.0.88:8081: connect: connection refused" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.350211 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" podUID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.350592 4784 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.350614 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"47e967c4505e5f77dc00c441db0a43b7965e7277571396e99e67a50335b2212a"} pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" containerMessage="Container manager failed liveness probe, will be restarted" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.350655 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" containerID="cri-o://47e967c4505e5f77dc00c441db0a43b7965e7277571396e99e67a50335b2212a" gracePeriod=10 Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.351307 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" podUID="3b13bce8-a43d-4833-9472-81f048a95be3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": dial tcp 10.217.0.95:8081: connect: connection refused" Jan 23 07:37:37 crc kubenswrapper[4784]: I0123 07:37:37.354465 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.106901 4784 patch_prober.go:28] interesting pod/console-58d484d7c8-xhg4d container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/health\": dial tcp 10.217.0.50:8443: connect: connection refused" 
start-of-body= Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.107304 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-58d484d7c8-xhg4d" podUID="7979f0da-f16f-4e2a-8c1c-a667607ddcf2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.50:8443/health\": dial tcp 10.217.0.50:8443: connect: connection refused" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.182599 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/healthz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.182677 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.183251 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" podUID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": dial tcp 10.217.0.97:8081: connect: connection refused" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.361719 4784 generic.go:334] "Generic (PLEG): container finished" podID="55f3492a-a5c0-460b-a93b-eb680b426a7c" containerID="54b37f324336b7ac491e75c944c2d2a1b1aeae29cacb998104697892098a73b1" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.361838 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" event={"ID":"55f3492a-a5c0-460b-a93b-eb680b426a7c","Type":"ContainerDied","Data":"54b37f324336b7ac491e75c944c2d2a1b1aeae29cacb998104697892098a73b1"} 
Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.363391 4784 scope.go:117] "RemoveContainer" containerID="54b37f324336b7ac491e75c944c2d2a1b1aeae29cacb998104697892098a73b1" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.364613 4784 generic.go:334] "Generic (PLEG): container finished" podID="3b13bce8-a43d-4833-9472-81f048a95be3" containerID="47e967c4505e5f77dc00c441db0a43b7965e7277571396e99e67a50335b2212a" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.364704 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" event={"ID":"3b13bce8-a43d-4833-9472-81f048a95be3","Type":"ContainerDied","Data":"47e967c4505e5f77dc00c441db0a43b7965e7277571396e99e67a50335b2212a"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.370613 4784 generic.go:334] "Generic (PLEG): container finished" podID="1cd86a7e-7738-4a67-9c19-d34a70dbc9fe" containerID="010578990e3d232d1556d179c8a0b01827db5788554a75996915119465720917" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.370708 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" event={"ID":"1cd86a7e-7738-4a67-9c19-d34a70dbc9fe","Type":"ContainerDied","Data":"010578990e3d232d1556d179c8a0b01827db5788554a75996915119465720917"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.371641 4784 scope.go:117] "RemoveContainer" containerID="010578990e3d232d1556d179c8a0b01827db5788554a75996915119465720917" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.374676 4784 generic.go:334] "Generic (PLEG): container finished" podID="be839066-996a-463b-b96c-a340d4e55ffd" containerID="6e0e6fe8dc45648f8e69794732f52cff84e5bd78ccadb7a41c14572ae7e31bca" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.374776 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" event={"ID":"be839066-996a-463b-b96c-a340d4e55ffd","Type":"ContainerDied","Data":"6e0e6fe8dc45648f8e69794732f52cff84e5bd78ccadb7a41c14572ae7e31bca"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.375298 4784 scope.go:117] "RemoveContainer" containerID="6e0e6fe8dc45648f8e69794732f52cff84e5bd78ccadb7a41c14572ae7e31bca" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.378831 4784 generic.go:334] "Generic (PLEG): container finished" podID="758913f1-9ef1-4fe9-9d5f-2cb794fcddef" containerID="6bc63c5a95df5062eef621931ad51b9f861a1cdf4cc0a8573bcd603f12ba4f88" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.378928 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" event={"ID":"758913f1-9ef1-4fe9-9d5f-2cb794fcddef","Type":"ContainerDied","Data":"6bc63c5a95df5062eef621931ad51b9f861a1cdf4cc0a8573bcd603f12ba4f88"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.379706 4784 scope.go:117] "RemoveContainer" containerID="6bc63c5a95df5062eef621931ad51b9f861a1cdf4cc0a8573bcd603f12ba4f88" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.394474 4784 generic.go:334] "Generic (PLEG): container finished" podID="80f7466e-7d6a-4416-9259-c30d69ee725e" containerID="bf524284fcbb2f0d69563d484d7060081a17dfdda94c642ca7132ebd30698d21" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.394586 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" event={"ID":"80f7466e-7d6a-4416-9259-c30d69ee725e","Type":"ContainerDied","Data":"bf524284fcbb2f0d69563d484d7060081a17dfdda94c642ca7132ebd30698d21"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.395456 4784 scope.go:117] "RemoveContainer" containerID="bf524284fcbb2f0d69563d484d7060081a17dfdda94c642ca7132ebd30698d21" Jan 23 07:37:38 crc 
kubenswrapper[4784]: I0123 07:37:38.397105 4784 generic.go:334] "Generic (PLEG): container finished" podID="409eb30c-947e-4d15-9b7c-8a73ba35ad70" containerID="0e285391e7b5121c93ace6e1df4cade3a5671af276b059c3be23471db1ac82d1" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.397164 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" event={"ID":"409eb30c-947e-4d15-9b7c-8a73ba35ad70","Type":"ContainerDied","Data":"0e285391e7b5121c93ace6e1df4cade3a5671af276b059c3be23471db1ac82d1"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.398322 4784 scope.go:117] "RemoveContainer" containerID="0e285391e7b5121c93ace6e1df4cade3a5671af276b059c3be23471db1ac82d1" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.406366 4784 generic.go:334] "Generic (PLEG): container finished" podID="89a376c8-b238-445d-99da-b85f3c421125" containerID="231a51d922328c63d8bc60339ff99a10fea2fe1633661ddd1c9a5790fba2865d" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.406812 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" event={"ID":"89a376c8-b238-445d-99da-b85f3c421125","Type":"ContainerDied","Data":"231a51d922328c63d8bc60339ff99a10fea2fe1633661ddd1c9a5790fba2865d"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.417947 4784 generic.go:334] "Generic (PLEG): container finished" podID="d6c01b10-21b9-4e8b-b051-6f148f468828" containerID="51a246ecb1c8341ee417386a01de941615ba8b8871c70bd1e1ddedba0b5957e6" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.418032 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" event={"ID":"d6c01b10-21b9-4e8b-b051-6f148f468828","Type":"ContainerDied","Data":"51a246ecb1c8341ee417386a01de941615ba8b8871c70bd1e1ddedba0b5957e6"} Jan 23 07:37:38 crc 
kubenswrapper[4784]: I0123 07:37:38.419233 4784 scope.go:117] "RemoveContainer" containerID="51a246ecb1c8341ee417386a01de941615ba8b8871c70bd1e1ddedba0b5957e6" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.440657 4784 generic.go:334] "Generic (PLEG): container finished" podID="be79eaa0-8040-4009-9f16-fcb56bffbff7" containerID="4fa2741719fe97418ce26b948bb1ade7fae773db04484e5c7a5da4e8d164b214" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.440722 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" event={"ID":"be79eaa0-8040-4009-9f16-fcb56bffbff7","Type":"ContainerDied","Data":"4fa2741719fe97418ce26b948bb1ade7fae773db04484e5c7a5da4e8d164b214"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.441249 4784 scope.go:117] "RemoveContainer" containerID="4fa2741719fe97418ce26b948bb1ade7fae773db04484e5c7a5da4e8d164b214" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.451850 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" event={"ID":"4fa12cd4-f2bc-4863-8b67-e246a0becee3","Type":"ContainerDied","Data":"7139c56d52233ed27f622be20e5ff8eadeff94c5d1740b50a0eed343b6837d4b"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.451740 4784 generic.go:334] "Generic (PLEG): container finished" podID="4fa12cd4-f2bc-4863-8b67-e246a0becee3" containerID="7139c56d52233ed27f622be20e5ff8eadeff94c5d1740b50a0eed343b6837d4b" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.452847 4784 scope.go:117] "RemoveContainer" containerID="7139c56d52233ed27f622be20e5ff8eadeff94c5d1740b50a0eed343b6837d4b" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.455564 4784 generic.go:334] "Generic (PLEG): container finished" podID="f54aca80-78ad-4bda-905c-0a519a4f33ed" containerID="e6e8c4861c90543a850ebe93c67b64f50be474672f17a8e932a763a05aafc7fe" exitCode=1 Jan 23 07:37:38 crc 
kubenswrapper[4784]: I0123 07:37:38.455653 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" event={"ID":"f54aca80-78ad-4bda-905c-0a519a4f33ed","Type":"ContainerDied","Data":"e6e8c4861c90543a850ebe93c67b64f50be474672f17a8e932a763a05aafc7fe"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.456561 4784 scope.go:117] "RemoveContainer" containerID="e6e8c4861c90543a850ebe93c67b64f50be474672f17a8e932a763a05aafc7fe" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.460977 4784 generic.go:334] "Generic (PLEG): container finished" podID="e8de3214-d1e9-4800-9ace-51a85b326df8" containerID="f838fee77ce3b52571a7392e8983db4d7e7fccab5228acbdaf463e00dddec160" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.461026 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" event={"ID":"e8de3214-d1e9-4800-9ace-51a85b326df8","Type":"ContainerDied","Data":"f838fee77ce3b52571a7392e8983db4d7e7fccab5228acbdaf463e00dddec160"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.461376 4784 scope.go:117] "RemoveContainer" containerID="f838fee77ce3b52571a7392e8983db4d7e7fccab5228acbdaf463e00dddec160" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.463538 4784 generic.go:334] "Generic (PLEG): container finished" podID="500659da-123f-4500-9c50-2b7b3b7656df" containerID="8c6e2f30c1086bdc04717bd26328e8f82831d5959fc07d4565ef0b555f515224" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.463581 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" event={"ID":"500659da-123f-4500-9c50-2b7b3b7656df","Type":"ContainerDied","Data":"8c6e2f30c1086bdc04717bd26328e8f82831d5959fc07d4565ef0b555f515224"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.463923 4784 scope.go:117] "RemoveContainer" 
containerID="8c6e2f30c1086bdc04717bd26328e8f82831d5959fc07d4565ef0b555f515224" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.465465 4784 generic.go:334] "Generic (PLEG): container finished" podID="f809f5f2-7409-4d7e-b938-1efc34dc4c2f" containerID="40921e91bec82063e3258fabd6962b4c9c841fff38d04a83665a3089915ee6ac" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.465519 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" event={"ID":"f809f5f2-7409-4d7e-b938-1efc34dc4c2f","Type":"ContainerDied","Data":"40921e91bec82063e3258fabd6962b4c9c841fff38d04a83665a3089915ee6ac"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.465923 4784 scope.go:117] "RemoveContainer" containerID="40921e91bec82063e3258fabd6962b4c9c841fff38d04a83665a3089915ee6ac" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.467676 4784 generic.go:334] "Generic (PLEG): container finished" podID="2c2a2d81-11ef-4146-ad50-8f7f39163253" containerID="63476a1e46ed7d85893d95bd90150350f90fe52a45e93b52573e5f9647778b97" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.467728 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" event={"ID":"2c2a2d81-11ef-4146-ad50-8f7f39163253","Type":"ContainerDied","Data":"63476a1e46ed7d85893d95bd90150350f90fe52a45e93b52573e5f9647778b97"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.469272 4784 scope.go:117] "RemoveContainer" containerID="63476a1e46ed7d85893d95bd90150350f90fe52a45e93b52573e5f9647778b97" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.474227 4784 generic.go:334] "Generic (PLEG): container finished" podID="417f228a-38b7-448a-980d-f64d6e113646" containerID="82a2acd77c9163fe773d7977bc7ceca847ea4aa15204dcde00d5bd41f97bd9b2" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.474307 4784 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" event={"ID":"417f228a-38b7-448a-980d-f64d6e113646","Type":"ContainerDied","Data":"82a2acd77c9163fe773d7977bc7ceca847ea4aa15204dcde00d5bd41f97bd9b2"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.475135 4784 scope.go:117] "RemoveContainer" containerID="82a2acd77c9163fe773d7977bc7ceca847ea4aa15204dcde00d5bd41f97bd9b2" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.476987 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.484915 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.485004 4784 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2389be05b151eb26e0644f4d99c1c86fe04639bc4c03e4b8d6b51c7653c9c041" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.485162 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2389be05b151eb26e0644f4d99c1c86fe04639bc4c03e4b8d6b51c7653c9c041"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.485225 4784 scope.go:117] "RemoveContainer" containerID="5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.489228 4784 generic.go:334] "Generic (PLEG): container finished" podID="89f228f9-5c69-4e48-bf35-01cc25b56ecd" containerID="4e60b62e73612f0b4b0f3e797e97f96da637371640a2e888d58c03fa29334e40" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 
07:37:38.489353 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" event={"ID":"89f228f9-5c69-4e48-bf35-01cc25b56ecd","Type":"ContainerDied","Data":"4e60b62e73612f0b4b0f3e797e97f96da637371640a2e888d58c03fa29334e40"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.492947 4784 generic.go:334] "Generic (PLEG): container finished" podID="2e269fdb-0502-4d62-9a0d-15094fdd942c" containerID="e26ed587ce5e460c34b86e7bc1b7c483ac8dd85729342b00c27b2af5c30c783f" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.493007 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" event={"ID":"2e269fdb-0502-4d62-9a0d-15094fdd942c","Type":"ContainerDied","Data":"e26ed587ce5e460c34b86e7bc1b7c483ac8dd85729342b00c27b2af5c30c783f"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.495320 4784 scope.go:117] "RemoveContainer" containerID="e26ed587ce5e460c34b86e7bc1b7c483ac8dd85729342b00c27b2af5c30c783f" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.499079 4784 generic.go:334] "Generic (PLEG): container finished" podID="0e01c35c-c9bd-4b02-adb1-be49a504ea54" containerID="c42fb7642df77acbb56e5bf38601b38c957c672b5960dbdaa7e280802952af9d" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.499161 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" event={"ID":"0e01c35c-c9bd-4b02-adb1-be49a504ea54","Type":"ContainerDied","Data":"c42fb7642df77acbb56e5bf38601b38c957c672b5960dbdaa7e280802952af9d"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.499622 4784 scope.go:117] "RemoveContainer" containerID="c42fb7642df77acbb56e5bf38601b38c957c672b5960dbdaa7e280802952af9d" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.501469 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="9bc11b97-7610-4c0f-898a-bb42b42c37d7" containerID="a30829bfcc203f2f5d8700e2a1835d1a049a72e098ca487ed3379131f7e283f8" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.501529 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" event={"ID":"9bc11b97-7610-4c0f-898a-bb42b42c37d7","Type":"ContainerDied","Data":"a30829bfcc203f2f5d8700e2a1835d1a049a72e098ca487ed3379131f7e283f8"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.501931 4784 scope.go:117] "RemoveContainer" containerID="a30829bfcc203f2f5d8700e2a1835d1a049a72e098ca487ed3379131f7e283f8" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.504078 4784 generic.go:334] "Generic (PLEG): container finished" podID="263b6093-4133-4159-b83a-32199b46fa5d" containerID="b32fc5b188b9c182a1519e7a319ffd0e2844c19457eea65e0751cd078b0e6c10" exitCode=137 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.504133 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerDied","Data":"b32fc5b188b9c182a1519e7a319ffd0e2844c19457eea65e0751cd078b0e6c10"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.505828 4784 generic.go:334] "Generic (PLEG): container finished" podID="47ec951f-c0f2-40f8-9361-6ca608819c25" containerID="d220f20ef16a6c603cdfef64326ddf3dd1395757b1f55e7645e66f19ffe8b95c" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.505891 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" event={"ID":"47ec951f-c0f2-40f8-9361-6ca608819c25","Type":"ContainerDied","Data":"d220f20ef16a6c603cdfef64326ddf3dd1395757b1f55e7645e66f19ffe8b95c"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.506713 4784 scope.go:117] "RemoveContainer" containerID="d220f20ef16a6c603cdfef64326ddf3dd1395757b1f55e7645e66f19ffe8b95c" Jan 23 
07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.507227 4784 generic.go:334] "Generic (PLEG): container finished" podID="7c5e978b-ac3c-439e-b2b1-ab025c130984" containerID="3a967834cd9047ed7272022dd4d7fd1a01c5ba480bc1135d15d7d2c34db154d3" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.507268 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" event={"ID":"7c5e978b-ac3c-439e-b2b1-ab025c130984","Type":"ContainerDied","Data":"3a967834cd9047ed7272022dd4d7fd1a01c5ba480bc1135d15d7d2c34db154d3"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.507539 4784 scope.go:117] "RemoveContainer" containerID="3a967834cd9047ed7272022dd4d7fd1a01c5ba480bc1135d15d7d2c34db154d3" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.508809 4784 generic.go:334] "Generic (PLEG): container finished" podID="138e85ae-26a7-45f3-ac25-61ece9cf8573" containerID="fbbbdec92bb9a9df5c13215ed78f06ccb5173a53d50f37653faf4b0afd9c7104" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.508885 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" event={"ID":"138e85ae-26a7-45f3-ac25-61ece9cf8573","Type":"ContainerDied","Data":"fbbbdec92bb9a9df5c13215ed78f06ccb5173a53d50f37653faf4b0afd9c7104"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.509289 4784 scope.go:117] "RemoveContainer" containerID="fbbbdec92bb9a9df5c13215ed78f06ccb5173a53d50f37653faf4b0afd9c7104" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.510338 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58d484d7c8-xhg4d_7979f0da-f16f-4e2a-8c1c-a667607ddcf2/console/0.log" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.510376 4784 generic.go:334] "Generic (PLEG): container finished" podID="7979f0da-f16f-4e2a-8c1c-a667607ddcf2" 
containerID="643fce47d779e6847ab0e9078cd9cd3469ebdc0475c20eda75ba9e533e566b32" exitCode=2 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.510425 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d484d7c8-xhg4d" event={"ID":"7979f0da-f16f-4e2a-8c1c-a667607ddcf2","Type":"ContainerDied","Data":"643fce47d779e6847ab0e9078cd9cd3469ebdc0475c20eda75ba9e533e566b32"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.512350 4784 generic.go:334] "Generic (PLEG): container finished" podID="3a006f0b-6298-4509-9533-178b38906875" containerID="72b0b8f30d9bec96529424e5f4f5fa35573a8e83f1ef99cd635f8e254ad11030" exitCode=1 Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.512804 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:37:38 crc kubenswrapper[4784]: E0123 07:37:38.513083 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.513257 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" event={"ID":"3a006f0b-6298-4509-9533-178b38906875","Type":"ContainerDied","Data":"72b0b8f30d9bec96529424e5f4f5fa35573a8e83f1ef99cd635f8e254ad11030"} Jan 23 07:37:38 crc kubenswrapper[4784]: I0123 07:37:38.513602 4784 scope.go:117] "RemoveContainer" containerID="72b0b8f30d9bec96529424e5f4f5fa35573a8e83f1ef99cd635f8e254ad11030" Jan 23 07:37:39 crc kubenswrapper[4784]: I0123 07:37:39.546373 4784 scope.go:117] "RemoveContainer" 
containerID="5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c" Jan 23 07:37:40 crc kubenswrapper[4784]: I0123 07:37:40.546247 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" event={"ID":"89a376c8-b238-445d-99da-b85f3c421125","Type":"ContainerStarted","Data":"f8e1128ec7de8aa903632efb0ff02d08da324fb5eebe36dd1ede36d9f2e9b843"} Jan 23 07:37:40 crc kubenswrapper[4784]: I0123 07:37:40.548672 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" event={"ID":"55f3492a-a5c0-460b-a93b-eb680b426a7c","Type":"ContainerStarted","Data":"ae9f7f15188d8a1c27ee81645d0703c2f033d169185fa3b1e420b3d986d125bf"} Jan 23 07:37:40 crc kubenswrapper[4784]: I0123 07:37:40.646708 4784 patch_prober.go:28] interesting pod/controller-manager-d5d5977cb-wwrnl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 07:37:40 crc kubenswrapper[4784]: I0123 07:37:40.646793 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" podUID="4cf2e2b0-09d4-411b-9a83-1b3b409368be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 07:37:40 crc kubenswrapper[4784]: I0123 07:37:40.918274 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.558916 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"87ac961b-d41b-43ef-b55e-07b0cf093e56","Type":"ContainerStarted","Data":"adaa090c290205df0e3edd5d9aaed28fd6940496a62a14265bed3325df154a7d"} Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.561071 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.562108 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a8b7882adf1b9c35ea4d6d216a4653b044019d52d3f65014038988bbb24cc4df"} Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.565223 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" event={"ID":"be79eaa0-8040-4009-9f16-fcb56bffbff7","Type":"ContainerStarted","Data":"e653bc434654ffc35dd10744a72d6bc5601e294cbf4b92a8f47a6a6dfee7e7da"} Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.565837 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.568240 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" event={"ID":"758913f1-9ef1-4fe9-9d5f-2cb794fcddef","Type":"ContainerStarted","Data":"00be94c901ab10785f0b4735a0627e968510fabe7dce2ed82a2561df4e562859"} Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.570045 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5dsxt" event={"ID":"80f7466e-7d6a-4416-9259-c30d69ee725e","Type":"ContainerStarted","Data":"9444a5cabf26a15d3a838a45a3bffd4a4cf9c5e605bddfadacd4ccb0039db6b7"} Jan 23 
07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.571972 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58d484d7c8-xhg4d_7979f0da-f16f-4e2a-8c1c-a667607ddcf2/console/0.log" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.572106 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d484d7c8-xhg4d" event={"ID":"7979f0da-f16f-4e2a-8c1c-a667607ddcf2","Type":"ContainerStarted","Data":"c89d1ba0689f99983890b4b21912e869948b8031d3ae75c8ce697d70afe00dfc"} Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.573768 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" event={"ID":"3b13bce8-a43d-4833-9472-81f048a95be3","Type":"ContainerStarted","Data":"cea5eccc5ec2dd721cad0b6a033ddbe49af4ce7cfc843594731f4cf84cb035ab"} Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.573844 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.575158 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" event={"ID":"1cd86a7e-7738-4a67-9c19-d34a70dbc9fe","Type":"ContainerStarted","Data":"8b42e277a5a5cca79643c332bb51693858ffb193caac72aadcd0ee1a5737c6ea"} Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.575710 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.576732 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" event={"ID":"4cf2e2b0-09d4-411b-9a83-1b3b409368be","Type":"ContainerStarted","Data":"847864b9739a257f7e69c2f1a6ac6fc5affd7d269535ac3bc3a2ce17edaf7da6"} Jan 23 
07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.576957 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.578225 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" event={"ID":"89f228f9-5c69-4e48-bf35-01cc25b56ecd","Type":"ContainerStarted","Data":"5a4d7f0a6d5b828f38ec0be9c3bacd197519d267205b568b97b46194cfea21da"} Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.578391 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.578502 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.579796 4784 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" containerID="cri-o://231a51d922328c63d8bc60339ff99a10fea2fe1633661ddd1c9a5790fba2865d" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.579851 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.581333 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d5d5977cb-wwrnl" Jan 23 07:37:41 crc kubenswrapper[4784]: I0123 07:37:41.594477 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 07:37:42 crc kubenswrapper[4784]: E0123 07:37:42.169921 4784 
log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox b8ec1c8522ab6ff1f6a369fbaac2d88fff7f58d1726b51e2a9aa21290a76de0e: identifier is not a container" containerID="5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c" Jan 23 07:37:42 crc kubenswrapper[4784]: I0123 07:37:42.170332 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f06a0c9117783b1af153c6a1a5d4e455c39a28815901d8224b49a52c1d0da3c"} err="rpc error: code = Unknown desc = failed to delete container k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox b8ec1c8522ab6ff1f6a369fbaac2d88fff7f58d1726b51e2a9aa21290a76de0e: identifier is not a container" Jan 23 07:37:42 crc kubenswrapper[4784]: I0123 07:37:42.482983 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:37:42 crc kubenswrapper[4784]: I0123 07:37:42.603866 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 07:37:42 crc kubenswrapper[4784]: I0123 07:37:42.631714 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" event={"ID":"2c2a2d81-11ef-4146-ad50-8f7f39163253","Type":"ContainerStarted","Data":"6f71fc73ee5056b7e975c96f599733d971c163d0ea9491954ddb201c8fce8035"} Jan 23 07:37:42 crc kubenswrapper[4784]: I0123 07:37:42.636074 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" Jan 23 
07:37:42 crc kubenswrapper[4784]: I0123 07:37:42.754796 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.644123 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" event={"ID":"409eb30c-947e-4d15-9b7c-8a73ba35ad70","Type":"ContainerStarted","Data":"590edb67953c6b7da4d68f0acc8075848432786fc0c30b6567205e9b732a2bb3"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.644634 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.647073 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" event={"ID":"0e01c35c-c9bd-4b02-adb1-be49a504ea54","Type":"ContainerStarted","Data":"02d02f640e25716c9f3c125f32037fe628a70d53c9e6e0f27983c9f1e1748faf"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.647312 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.650526 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" event={"ID":"47ec951f-c0f2-40f8-9361-6ca608819c25","Type":"ContainerStarted","Data":"5e3f98c61d78328dcd9b1b2ca3d796076f31c3f16fad958d9d600bccaee571ad"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.650998 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.653555 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" event={"ID":"500659da-123f-4500-9c50-2b7b3b7656df","Type":"ContainerStarted","Data":"7263422ede563ce7be034608bb8a1e0eb38d3399a242d57d24aa49b9fdb9a749"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.653721 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.655558 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" event={"ID":"f809f5f2-7409-4d7e-b938-1efc34dc4c2f","Type":"ContainerStarted","Data":"f0f4e53ead820372b34e7cb153f37308b683d7091f99c13b9c54e18253d4fa00"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.656292 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.658072 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" event={"ID":"138e85ae-26a7-45f3-ac25-61ece9cf8573","Type":"ContainerStarted","Data":"c79b2d34beaa6514955ac6a2999321de41f0c35346288028a3bd7a145ac675c3"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.658571 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.671591 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" event={"ID":"d6c01b10-21b9-4e8b-b051-6f148f468828","Type":"ContainerStarted","Data":"822d713638653d7b3a01d1e1e9922714c8ca0e13dcfa37fd62e846814cd0cc23"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.672568 4784 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.696036 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" event={"ID":"2e269fdb-0502-4d62-9a0d-15094fdd942c","Type":"ContainerStarted","Data":"0a09230f493b523bac35df25b9c98f22b39607717d74016c99379e5563ee4957"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.696972 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.706177 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" event={"ID":"be839066-996a-463b-b96c-a340d4e55ffd","Type":"ContainerStarted","Data":"e14cef986842099a7fbab0ae2bda91f6123fd4f8ca8c6e1f0dc9b438a1c145df"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.707118 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.728986 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" event={"ID":"f54aca80-78ad-4bda-905c-0a519a4f33ed","Type":"ContainerStarted","Data":"96c82b4e7638c4b011458edf4b6ee0a2f1eb80d98f11198d9c99b3053e7ad494"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.729219 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.742245 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" event={"ID":"e8de3214-d1e9-4800-9ace-51a85b326df8","Type":"ContainerStarted","Data":"2c4bff56e8d8ccedd7dde441ca567c2f5482da097663f3074a436a9eb99d695d"} Jan 23 07:37:43 crc kubenswrapper[4784]: I0123 07:37:43.743152 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.751460 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" event={"ID":"7c5e978b-ac3c-439e-b2b1-ab025c130984","Type":"ContainerStarted","Data":"8bfc7d2864edbcf3b9ac73a65b98d1e2f7540686fd73814619fa7a426b47ccb5"} Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.751778 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.753207 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" event={"ID":"9bc11b97-7610-4c0f-898a-bb42b42c37d7","Type":"ContainerStarted","Data":"33c740bed64a6c21ad01e1728ed09982dad67f93c16bc9fcbf07ecac9c91e6a9"} Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.753907 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.792540 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerStarted","Data":"f799c52a7ecf1447119bea6d4845f3b1be6b659df78ec67258b2773cdf1a7eb9"} Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.793473 4784 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"cd76c870fe40847de09e69d902853b6ac7f531bbf3e6b40751980f83fccc41ae"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.793552 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="263b6093-4133-4159-b83a-32199b46fa5d" containerName="ceilometer-notification-agent" containerID="cri-o://cd76c870fe40847de09e69d902853b6ac7f531bbf3e6b40751980f83fccc41ae" gracePeriod=30 Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.800419 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" event={"ID":"4fa12cd4-f2bc-4863-8b67-e246a0becee3","Type":"ContainerStarted","Data":"ca02998adf1af40a27c916a49d6e470d7e48641fe498edda4712fc1d92cfc647"} Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.800678 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.808064 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4e84d3df-4011-472a-9b95-9ed21dea27d5","Type":"ContainerStarted","Data":"d079edc73f2b4fcdfd746dbe5aed481b3aa2db770c1dcb9ae7c9f74c28abd9c5"} Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.809328 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.811648 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" 
event={"ID":"417f228a-38b7-448a-980d-f64d6e113646","Type":"ContainerStarted","Data":"15412bae0ad04a2a00cbb6a8e04ddbb0e6461cb457ce2f6bc0e27ca9c13300a5"} Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.811926 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" Jan 23 07:37:44 crc kubenswrapper[4784]: I0123 07:37:44.813932 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" event={"ID":"3a006f0b-6298-4509-9533-178b38906875","Type":"ContainerStarted","Data":"f5fa08ad9c1a977996eaccd61adb19752c11a79ab6c3f0ab433c1b9af27341a8"} Jan 23 07:37:45 crc kubenswrapper[4784]: I0123 07:37:45.136469 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-zkswk" Jan 23 07:37:45 crc kubenswrapper[4784]: I0123 07:37:45.357429 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-2vptn" Jan 23 07:37:45 crc kubenswrapper[4784]: I0123 07:37:45.563198 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-7znp2" Jan 23 07:37:45 crc kubenswrapper[4784]: I0123 07:37:45.679714 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-jqhrt" Jan 23 07:37:45 crc kubenswrapper[4784]: I0123 07:37:45.717360 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" Jan 23 07:37:45 crc kubenswrapper[4784]: I0123 07:37:45.758366 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" Jan 
23 07:37:45 crc kubenswrapper[4784]: I0123 07:37:45.829090 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" Jan 23 07:37:46 crc kubenswrapper[4784]: I0123 07:37:46.154590 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-94f28" Jan 23 07:37:47 crc kubenswrapper[4784]: I0123 07:37:47.770901 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 07:37:47 crc kubenswrapper[4784]: I0123 07:37:47.845129 4784 generic.go:334] "Generic (PLEG): container finished" podID="263b6093-4133-4159-b83a-32199b46fa5d" containerID="cd76c870fe40847de09e69d902853b6ac7f531bbf3e6b40751980f83fccc41ae" exitCode=0 Jan 23 07:37:47 crc kubenswrapper[4784]: I0123 07:37:47.845193 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerDied","Data":"cd76c870fe40847de09e69d902853b6ac7f531bbf3e6b40751980f83fccc41ae"} Jan 23 07:37:48 crc kubenswrapper[4784]: I0123 07:37:48.106502 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 07:37:48 crc kubenswrapper[4784]: I0123 07:37:48.106814 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 07:37:48 crc kubenswrapper[4784]: I0123 07:37:48.110251 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 07:37:48 crc kubenswrapper[4784]: I0123 07:37:48.126975 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:37:48 crc kubenswrapper[4784]: I0123 07:37:48.130945 4784 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:37:48 crc kubenswrapper[4784]: I0123 07:37:48.188300 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cccd889d5-jxhkn" Jan 23 07:37:48 crc kubenswrapper[4784]: I0123 07:37:48.864774 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"263b6093-4133-4159-b83a-32199b46fa5d","Type":"ContainerStarted","Data":"6b634fd9c2636cf7481ea29f32266090357ea255a0ecee69817f4df5c274f179"} Jan 23 07:37:48 crc kubenswrapper[4784]: I0123 07:37:48.870434 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58d484d7c8-xhg4d" Jan 23 07:37:50 crc kubenswrapper[4784]: I0123 07:37:50.255404 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:37:50 crc kubenswrapper[4784]: E0123 07:37:50.256490 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:37:50 crc kubenswrapper[4784]: I0123 07:37:50.926936 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-hl8gk" Jan 23 07:37:51 crc kubenswrapper[4784]: I0123 07:37:51.602362 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6" Jan 23 07:37:52 crc kubenswrapper[4784]: I0123 07:37:52.487487 
4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.269670 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7lj"] Jan 23 07:37:53 crc kubenswrapper[4784]: E0123 07:37:53.270135 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="extract-utilities" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.270165 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="extract-utilities" Jan 23 07:37:53 crc kubenswrapper[4784]: E0123 07:37:53.270203 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="registry-server" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.270211 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="registry-server" Jan 23 07:37:53 crc kubenswrapper[4784]: E0123 07:37:53.270288 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="extract-content" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.270297 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="extract-content" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.270557 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="1639820c-834e-48f3-923e-9fba2f1ca0d8" containerName="registry-server" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.272505 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.293048 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7lj"] Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.353397 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-utilities\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.353491 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-catalog-content\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.353546 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hqw6\" (UniqueName: \"kubernetes.io/projected/6ae2914d-5681-40ed-92de-a50b09c1c1ba-kube-api-access-4hqw6\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.456077 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-catalog-content\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.456142 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4hqw6\" (UniqueName: \"kubernetes.io/projected/6ae2914d-5681-40ed-92de-a50b09c1c1ba-kube-api-access-4hqw6\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.456270 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-utilities\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.456650 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-catalog-content\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.456695 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-utilities\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.491238 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hqw6\" (UniqueName: \"kubernetes.io/projected/6ae2914d-5681-40ed-92de-a50b09c1c1ba-kube-api-access-4hqw6\") pod \"redhat-marketplace-xj7lj\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:53 crc kubenswrapper[4784]: I0123 07:37:53.599612 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:37:54 crc kubenswrapper[4784]: I0123 07:37:54.160993 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7lj"] Jan 23 07:37:54 crc kubenswrapper[4784]: W0123 07:37:54.166107 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae2914d_5681_40ed_92de_a50b09c1c1ba.slice/crio-7c69d0f563d2da41aedb39f76732600cade31892b222f276597145ba489d7bea WatchSource:0}: Error finding container 7c69d0f563d2da41aedb39f76732600cade31892b222f276597145ba489d7bea: Status 404 returned error can't find the container with id 7c69d0f563d2da41aedb39f76732600cade31892b222f276597145ba489d7bea Jan 23 07:37:54 crc kubenswrapper[4784]: I0123 07:37:54.937224 4784 generic.go:334] "Generic (PLEG): container finished" podID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerID="fd62ddb1dd4a5e783792ad467f07d41068c32d3609ef202f801dc69d9947c1e4" exitCode=0 Jan 23 07:37:54 crc kubenswrapper[4784]: I0123 07:37:54.937541 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7lj" event={"ID":"6ae2914d-5681-40ed-92de-a50b09c1c1ba","Type":"ContainerDied","Data":"fd62ddb1dd4a5e783792ad467f07d41068c32d3609ef202f801dc69d9947c1e4"} Jan 23 07:37:54 crc kubenswrapper[4784]: I0123 07:37:54.937792 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7lj" event={"ID":"6ae2914d-5681-40ed-92de-a50b09c1c1ba","Type":"ContainerStarted","Data":"7c69d0f563d2da41aedb39f76732600cade31892b222f276597145ba489d7bea"} Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.086554 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-q7sn8" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.198688 4784 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-nb6tb" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.223815 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-hcqtn" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.239716 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-lvmlf" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.271216 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7c664964d9-t6kpc" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.303036 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.448147 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-kl6d5" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.501395 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-wzjzl" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.611247 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-krp8w" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.663246 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-82hzn" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.721691 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-2wrsg" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.755232 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.779303 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-c2btv" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.830907 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c2zh7" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.950938 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5b5d4f4b97-64mxt" Jan 23 07:37:55 crc kubenswrapper[4784]: I0123 07:37:55.952944 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7lj" event={"ID":"6ae2914d-5681-40ed-92de-a50b09c1c1ba","Type":"ContainerStarted","Data":"ce3cbde5c1b34f8201ec480e763ab95c4ea63d901286669567bf2bdb27ff7eb2"} Jan 23 07:37:56 crc kubenswrapper[4784]: I0123 07:37:56.057903 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-gbncb" Jan 23 07:37:56 crc kubenswrapper[4784]: I0123 07:37:56.982165 4784 generic.go:334] "Generic (PLEG): container finished" podID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerID="ce3cbde5c1b34f8201ec480e763ab95c4ea63d901286669567bf2bdb27ff7eb2" exitCode=0 Jan 23 07:37:56 crc kubenswrapper[4784]: I0123 07:37:56.982229 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7lj" 
event={"ID":"6ae2914d-5681-40ed-92de-a50b09c1c1ba","Type":"ContainerDied","Data":"ce3cbde5c1b34f8201ec480e763ab95c4ea63d901286669567bf2bdb27ff7eb2"} Jan 23 07:38:03 crc kubenswrapper[4784]: I0123 07:38:03.067446 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7lj" event={"ID":"6ae2914d-5681-40ed-92de-a50b09c1c1ba","Type":"ContainerStarted","Data":"970f9727dc122f75438bf72b25452999085619afabcca7f78c282d7b052c6565"} Jan 23 07:38:03 crc kubenswrapper[4784]: I0123 07:38:03.117184 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xj7lj" podStartSLOduration=4.265249616 podStartE2EDuration="10.117165526s" podCreationTimestamp="2026-01-23 07:37:53 +0000 UTC" firstStartedPulling="2026-01-23 07:37:54.940041172 +0000 UTC m=+4678.172549176" lastFinishedPulling="2026-01-23 07:38:00.791957112 +0000 UTC m=+4684.024465086" observedRunningTime="2026-01-23 07:38:03.112193204 +0000 UTC m=+4686.344701178" watchObservedRunningTime="2026-01-23 07:38:03.117165526 +0000 UTC m=+4686.349673500" Jan 23 07:38:03 crc kubenswrapper[4784]: I0123 07:38:03.600264 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:38:03 crc kubenswrapper[4784]: I0123 07:38:03.600630 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:38:04 crc kubenswrapper[4784]: I0123 07:38:04.810009 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-xj7lj" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="registry-server" probeResult="failure" output=< Jan 23 07:38:04 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 07:38:04 crc kubenswrapper[4784]: > Jan 23 07:38:05 crc kubenswrapper[4784]: I0123 07:38:05.254654 4784 scope.go:117] 
"RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:38:05 crc kubenswrapper[4784]: E0123 07:38:05.254940 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:38:13 crc kubenswrapper[4784]: I0123 07:38:13.647112 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:38:13 crc kubenswrapper[4784]: I0123 07:38:13.710444 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:38:13 crc kubenswrapper[4784]: I0123 07:38:13.891866 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7lj"] Jan 23 07:38:15 crc kubenswrapper[4784]: I0123 07:38:15.189246 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xj7lj" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="registry-server" containerID="cri-o://970f9727dc122f75438bf72b25452999085619afabcca7f78c282d7b052c6565" gracePeriod=2 Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.220491 4784 generic.go:334] "Generic (PLEG): container finished" podID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerID="970f9727dc122f75438bf72b25452999085619afabcca7f78c282d7b052c6565" exitCode=0 Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.220563 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7lj" 
event={"ID":"6ae2914d-5681-40ed-92de-a50b09c1c1ba","Type":"ContainerDied","Data":"970f9727dc122f75438bf72b25452999085619afabcca7f78c282d7b052c6565"} Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.533640 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.677497 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-catalog-content\") pod \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.677578 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-utilities\") pod \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.677657 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hqw6\" (UniqueName: \"kubernetes.io/projected/6ae2914d-5681-40ed-92de-a50b09c1c1ba-kube-api-access-4hqw6\") pod \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\" (UID: \"6ae2914d-5681-40ed-92de-a50b09c1c1ba\") " Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.678507 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-utilities" (OuterVolumeSpecName: "utilities") pod "6ae2914d-5681-40ed-92de-a50b09c1c1ba" (UID: "6ae2914d-5681-40ed-92de-a50b09c1c1ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.688560 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae2914d-5681-40ed-92de-a50b09c1c1ba-kube-api-access-4hqw6" (OuterVolumeSpecName: "kube-api-access-4hqw6") pod "6ae2914d-5681-40ed-92de-a50b09c1c1ba" (UID: "6ae2914d-5681-40ed-92de-a50b09c1c1ba"). InnerVolumeSpecName "kube-api-access-4hqw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.701638 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ae2914d-5681-40ed-92de-a50b09c1c1ba" (UID: "6ae2914d-5681-40ed-92de-a50b09c1c1ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.779940 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hqw6\" (UniqueName: \"kubernetes.io/projected/6ae2914d-5681-40ed-92de-a50b09c1c1ba-kube-api-access-4hqw6\") on node \"crc\" DevicePath \"\"" Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.779977 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:38:16 crc kubenswrapper[4784]: I0123 07:38:16.779987 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae2914d-5681-40ed-92de-a50b09c1c1ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:38:17 crc kubenswrapper[4784]: I0123 07:38:17.232184 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7lj" 
event={"ID":"6ae2914d-5681-40ed-92de-a50b09c1c1ba","Type":"ContainerDied","Data":"7c69d0f563d2da41aedb39f76732600cade31892b222f276597145ba489d7bea"} Jan 23 07:38:17 crc kubenswrapper[4784]: I0123 07:38:17.232237 4784 scope.go:117] "RemoveContainer" containerID="970f9727dc122f75438bf72b25452999085619afabcca7f78c282d7b052c6565" Jan 23 07:38:17 crc kubenswrapper[4784]: I0123 07:38:17.232291 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj7lj" Jan 23 07:38:17 crc kubenswrapper[4784]: I0123 07:38:17.252640 4784 scope.go:117] "RemoveContainer" containerID="ce3cbde5c1b34f8201ec480e763ab95c4ea63d901286669567bf2bdb27ff7eb2" Jan 23 07:38:17 crc kubenswrapper[4784]: I0123 07:38:17.289486 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7lj"] Jan 23 07:38:17 crc kubenswrapper[4784]: I0123 07:38:17.308706 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7lj"] Jan 23 07:38:17 crc kubenswrapper[4784]: I0123 07:38:17.320928 4784 scope.go:117] "RemoveContainer" containerID="fd62ddb1dd4a5e783792ad467f07d41068c32d3609ef202f801dc69d9947c1e4" Jan 23 07:38:19 crc kubenswrapper[4784]: I0123 07:38:19.254319 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:38:19 crc kubenswrapper[4784]: E0123 07:38:19.254976 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:38:19 crc kubenswrapper[4784]: I0123 07:38:19.266436 4784 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" path="/var/lib/kubelet/pods/6ae2914d-5681-40ed-92de-a50b09c1c1ba/volumes" Jan 23 07:38:22 crc kubenswrapper[4784]: I0123 07:38:22.687775 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8589677cff-dzl65" Jan 23 07:38:33 crc kubenswrapper[4784]: I0123 07:38:33.253886 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:38:33 crc kubenswrapper[4784]: E0123 07:38:33.254876 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:38:48 crc kubenswrapper[4784]: I0123 07:38:48.254206 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:38:48 crc kubenswrapper[4784]: E0123 07:38:48.255172 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:38:50 crc kubenswrapper[4784]: I0123 07:38:50.770779 4784 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="85680fc8-18ee-4984-8bdb-a489d1e71d39" containerName="galera" probeResult="failure" output="command timed out" Jan 23 
07:38:50 crc kubenswrapper[4784]: I0123 07:38:50.771292 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="85680fc8-18ee-4984-8bdb-a489d1e71d39" containerName="galera" probeResult="failure" output="command timed out" Jan 23 07:38:59 crc kubenswrapper[4784]: I0123 07:38:59.254636 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1" Jan 23 07:38:59 crc kubenswrapper[4784]: I0123 07:38:59.767692 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"7f2bad361834119d810d115649299c2d95460097ac625999f8513e258e612407"} Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.724247 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ckq5j"] Jan 23 07:39:59 crc kubenswrapper[4784]: E0123 07:39:59.726003 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="extract-utilities" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.726024 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="extract-utilities" Jan 23 07:39:59 crc kubenswrapper[4784]: E0123 07:39:59.726044 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="extract-content" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.726054 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="extract-content" Jan 23 07:39:59 crc kubenswrapper[4784]: E0123 07:39:59.726105 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="registry-server" Jan 23 07:39:59 crc kubenswrapper[4784]: 
I0123 07:39:59.726113 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="registry-server" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.726346 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae2914d-5681-40ed-92de-a50b09c1c1ba" containerName="registry-server" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.728169 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.736908 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckq5j"] Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.787903 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-catalog-content\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.788220 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzqg\" (UniqueName: \"kubernetes.io/projected/98c2d1b3-314b-44de-b177-343d013a6649-kube-api-access-xjzqg\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.788576 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-utilities\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc 
kubenswrapper[4784]: I0123 07:39:59.890910 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-catalog-content\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.891036 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjzqg\" (UniqueName: \"kubernetes.io/projected/98c2d1b3-314b-44de-b177-343d013a6649-kube-api-access-xjzqg\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.891124 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-utilities\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.892054 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-catalog-content\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.892066 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-utilities\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:39:59 crc kubenswrapper[4784]: I0123 07:39:59.918124 
4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjzqg\" (UniqueName: \"kubernetes.io/projected/98c2d1b3-314b-44de-b177-343d013a6649-kube-api-access-xjzqg\") pod \"certified-operators-ckq5j\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:40:00 crc kubenswrapper[4784]: I0123 07:40:00.082050 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:40:00 crc kubenswrapper[4784]: I0123 07:40:00.624761 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckq5j"] Jan 23 07:40:01 crc kubenswrapper[4784]: I0123 07:40:01.682994 4784 generic.go:334] "Generic (PLEG): container finished" podID="98c2d1b3-314b-44de-b177-343d013a6649" containerID="793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f" exitCode=0 Jan 23 07:40:01 crc kubenswrapper[4784]: I0123 07:40:01.683073 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckq5j" event={"ID":"98c2d1b3-314b-44de-b177-343d013a6649","Type":"ContainerDied","Data":"793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f"} Jan 23 07:40:01 crc kubenswrapper[4784]: I0123 07:40:01.683713 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckq5j" event={"ID":"98c2d1b3-314b-44de-b177-343d013a6649","Type":"ContainerStarted","Data":"faad4a7dfa9f09f9455226f70acdffc79dc37cc885f03e3bd9f227d2ea6220d4"} Jan 23 07:40:03 crc kubenswrapper[4784]: I0123 07:40:03.720288 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckq5j" event={"ID":"98c2d1b3-314b-44de-b177-343d013a6649","Type":"ContainerStarted","Data":"0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f"} Jan 23 07:40:04 crc kubenswrapper[4784]: I0123 07:40:04.742588 4784 
generic.go:334] "Generic (PLEG): container finished" podID="98c2d1b3-314b-44de-b177-343d013a6649" containerID="0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f" exitCode=0 Jan 23 07:40:04 crc kubenswrapper[4784]: I0123 07:40:04.742691 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckq5j" event={"ID":"98c2d1b3-314b-44de-b177-343d013a6649","Type":"ContainerDied","Data":"0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f"} Jan 23 07:40:05 crc kubenswrapper[4784]: I0123 07:40:05.762822 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckq5j" event={"ID":"98c2d1b3-314b-44de-b177-343d013a6649","Type":"ContainerStarted","Data":"721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c"} Jan 23 07:40:05 crc kubenswrapper[4784]: I0123 07:40:05.799866 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ckq5j" podStartSLOduration=3.225791582 podStartE2EDuration="6.799839041s" podCreationTimestamp="2026-01-23 07:39:59 +0000 UTC" firstStartedPulling="2026-01-23 07:40:01.68505169 +0000 UTC m=+4804.917559674" lastFinishedPulling="2026-01-23 07:40:05.259099149 +0000 UTC m=+4808.491607133" observedRunningTime="2026-01-23 07:40:05.785003089 +0000 UTC m=+4809.017511083" watchObservedRunningTime="2026-01-23 07:40:05.799839041 +0000 UTC m=+4809.032347035" Jan 23 07:40:10 crc kubenswrapper[4784]: I0123 07:40:10.082986 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:40:10 crc kubenswrapper[4784]: I0123 07:40:10.084662 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:40:10 crc kubenswrapper[4784]: I0123 07:40:10.172603 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:40:10 crc kubenswrapper[4784]: I0123 07:40:10.940085 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:40:11 crc kubenswrapper[4784]: I0123 07:40:11.009978 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckq5j"] Jan 23 07:40:12 crc kubenswrapper[4784]: I0123 07:40:12.876191 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ckq5j" podUID="98c2d1b3-314b-44de-b177-343d013a6649" containerName="registry-server" containerID="cri-o://721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c" gracePeriod=2 Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.393481 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.520531 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-catalog-content\") pod \"98c2d1b3-314b-44de-b177-343d013a6649\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.520821 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjzqg\" (UniqueName: \"kubernetes.io/projected/98c2d1b3-314b-44de-b177-343d013a6649-kube-api-access-xjzqg\") pod \"98c2d1b3-314b-44de-b177-343d013a6649\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.521384 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-utilities\") pod 
\"98c2d1b3-314b-44de-b177-343d013a6649\" (UID: \"98c2d1b3-314b-44de-b177-343d013a6649\") " Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.522240 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-utilities" (OuterVolumeSpecName: "utilities") pod "98c2d1b3-314b-44de-b177-343d013a6649" (UID: "98c2d1b3-314b-44de-b177-343d013a6649"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.522426 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.529299 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c2d1b3-314b-44de-b177-343d013a6649-kube-api-access-xjzqg" (OuterVolumeSpecName: "kube-api-access-xjzqg") pod "98c2d1b3-314b-44de-b177-343d013a6649" (UID: "98c2d1b3-314b-44de-b177-343d013a6649"). InnerVolumeSpecName "kube-api-access-xjzqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.560942 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98c2d1b3-314b-44de-b177-343d013a6649" (UID: "98c2d1b3-314b-44de-b177-343d013a6649"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.625331 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98c2d1b3-314b-44de-b177-343d013a6649-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.625387 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjzqg\" (UniqueName: \"kubernetes.io/projected/98c2d1b3-314b-44de-b177-343d013a6649-kube-api-access-xjzqg\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.898641 4784 generic.go:334] "Generic (PLEG): container finished" podID="98c2d1b3-314b-44de-b177-343d013a6649" containerID="721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c" exitCode=0 Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.898821 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckq5j" event={"ID":"98c2d1b3-314b-44de-b177-343d013a6649","Type":"ContainerDied","Data":"721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c"} Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.900038 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckq5j" event={"ID":"98c2d1b3-314b-44de-b177-343d013a6649","Type":"ContainerDied","Data":"faad4a7dfa9f09f9455226f70acdffc79dc37cc885f03e3bd9f227d2ea6220d4"} Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.900077 4784 scope.go:117] "RemoveContainer" containerID="721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.899033 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckq5j" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.949812 4784 scope.go:117] "RemoveContainer" containerID="0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.966030 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckq5j"] Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.978806 4784 scope.go:117] "RemoveContainer" containerID="793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f" Jan 23 07:40:13 crc kubenswrapper[4784]: I0123 07:40:13.979551 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ckq5j"] Jan 23 07:40:14 crc kubenswrapper[4784]: I0123 07:40:14.045869 4784 scope.go:117] "RemoveContainer" containerID="721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c" Jan 23 07:40:14 crc kubenswrapper[4784]: E0123 07:40:14.046691 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c\": container with ID starting with 721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c not found: ID does not exist" containerID="721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c" Jan 23 07:40:14 crc kubenswrapper[4784]: I0123 07:40:14.046736 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c"} err="failed to get container status \"721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c\": rpc error: code = NotFound desc = could not find container \"721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c\": container with ID starting with 721a8a89e3957bb01b88e06cd437b45f88c4a4cf07ae9f9f63789e8986ef598c not 
found: ID does not exist" Jan 23 07:40:14 crc kubenswrapper[4784]: I0123 07:40:14.046794 4784 scope.go:117] "RemoveContainer" containerID="0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f" Jan 23 07:40:14 crc kubenswrapper[4784]: E0123 07:40:14.047154 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f\": container with ID starting with 0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f not found: ID does not exist" containerID="0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f" Jan 23 07:40:14 crc kubenswrapper[4784]: I0123 07:40:14.047419 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f"} err="failed to get container status \"0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f\": rpc error: code = NotFound desc = could not find container \"0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f\": container with ID starting with 0ffb4e9c2a85aacdd963061b07252325237141d4f93a8242f226cf2e8f97213f not found: ID does not exist" Jan 23 07:40:14 crc kubenswrapper[4784]: I0123 07:40:14.047457 4784 scope.go:117] "RemoveContainer" containerID="793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f" Jan 23 07:40:14 crc kubenswrapper[4784]: E0123 07:40:14.047807 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f\": container with ID starting with 793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f not found: ID does not exist" containerID="793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f" Jan 23 07:40:14 crc kubenswrapper[4784]: I0123 07:40:14.047860 4784 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f"} err="failed to get container status \"793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f\": rpc error: code = NotFound desc = could not find container \"793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f\": container with ID starting with 793229a783857ff2ed96c91bb675daf7ad9aa21ab5f15301aea2ba36a990e26f not found: ID does not exist" Jan 23 07:40:15 crc kubenswrapper[4784]: I0123 07:40:15.263612 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98c2d1b3-314b-44de-b177-343d013a6649" path="/var/lib/kubelet/pods/98c2d1b3-314b-44de-b177-343d013a6649/volumes" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.622502 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5pjmq"] Jan 23 07:40:24 crc kubenswrapper[4784]: E0123 07:40:24.623478 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c2d1b3-314b-44de-b177-343d013a6649" containerName="registry-server" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.623495 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c2d1b3-314b-44de-b177-343d013a6649" containerName="registry-server" Jan 23 07:40:24 crc kubenswrapper[4784]: E0123 07:40:24.623516 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c2d1b3-314b-44de-b177-343d013a6649" containerName="extract-utilities" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.623523 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c2d1b3-314b-44de-b177-343d013a6649" containerName="extract-utilities" Jan 23 07:40:24 crc kubenswrapper[4784]: E0123 07:40:24.623558 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c2d1b3-314b-44de-b177-343d013a6649" containerName="extract-content" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 
07:40:24.623566 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c2d1b3-314b-44de-b177-343d013a6649" containerName="extract-content" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.623880 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c2d1b3-314b-44de-b177-343d013a6649" containerName="registry-server" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.625559 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.636356 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pjmq"] Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.686836 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-catalog-content\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.686895 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-utilities\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.686930 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2m7j\" (UniqueName: \"kubernetes.io/projected/c4a17b76-86eb-461a-9358-883f6347f3e5-kube-api-access-d2m7j\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc 
kubenswrapper[4784]: I0123 07:40:24.788896 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-catalog-content\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.788953 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-utilities\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.788983 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2m7j\" (UniqueName: \"kubernetes.io/projected/c4a17b76-86eb-461a-9358-883f6347f3e5-kube-api-access-d2m7j\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.789532 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-catalog-content\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.789719 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-utilities\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.814433 
4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2m7j\" (UniqueName: \"kubernetes.io/projected/c4a17b76-86eb-461a-9358-883f6347f3e5-kube-api-access-d2m7j\") pod \"community-operators-5pjmq\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:24 crc kubenswrapper[4784]: I0123 07:40:24.948493 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:25 crc kubenswrapper[4784]: I0123 07:40:25.457531 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pjmq"] Jan 23 07:40:26 crc kubenswrapper[4784]: I0123 07:40:26.032924 4784 generic.go:334] "Generic (PLEG): container finished" podID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerID="c9d658af972be848f316b122a2d48688ec01564716d77525f3ce480bf85afe5a" exitCode=0 Jan 23 07:40:26 crc kubenswrapper[4784]: I0123 07:40:26.032981 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pjmq" event={"ID":"c4a17b76-86eb-461a-9358-883f6347f3e5","Type":"ContainerDied","Data":"c9d658af972be848f316b122a2d48688ec01564716d77525f3ce480bf85afe5a"} Jan 23 07:40:26 crc kubenswrapper[4784]: I0123 07:40:26.033319 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pjmq" event={"ID":"c4a17b76-86eb-461a-9358-883f6347f3e5","Type":"ContainerStarted","Data":"4fcaae869474e49ce6c31c942df796ba44705e39ea877ef37aaafdaa4ad771f6"} Jan 23 07:40:27 crc kubenswrapper[4784]: I0123 07:40:27.045832 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pjmq" event={"ID":"c4a17b76-86eb-461a-9358-883f6347f3e5","Type":"ContainerStarted","Data":"c92f4be4f2f3416bf3f8e592948e58c411224f5f96bfe99dc265c8dbe65240e8"} Jan 23 07:40:28 crc kubenswrapper[4784]: I0123 07:40:28.057583 4784 
generic.go:334] "Generic (PLEG): container finished" podID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerID="c92f4be4f2f3416bf3f8e592948e58c411224f5f96bfe99dc265c8dbe65240e8" exitCode=0 Jan 23 07:40:28 crc kubenswrapper[4784]: I0123 07:40:28.057647 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pjmq" event={"ID":"c4a17b76-86eb-461a-9358-883f6347f3e5","Type":"ContainerDied","Data":"c92f4be4f2f3416bf3f8e592948e58c411224f5f96bfe99dc265c8dbe65240e8"} Jan 23 07:40:29 crc kubenswrapper[4784]: I0123 07:40:29.071824 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pjmq" event={"ID":"c4a17b76-86eb-461a-9358-883f6347f3e5","Type":"ContainerStarted","Data":"8eefff8a4bc1a8df96a2515bfb3011669d82b201b300052eae0a009273b3db49"} Jan 23 07:40:29 crc kubenswrapper[4784]: I0123 07:40:29.095434 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5pjmq" podStartSLOduration=2.648361738 podStartE2EDuration="5.095408235s" podCreationTimestamp="2026-01-23 07:40:24 +0000 UTC" firstStartedPulling="2026-01-23 07:40:26.035215132 +0000 UTC m=+4829.267723106" lastFinishedPulling="2026-01-23 07:40:28.482261619 +0000 UTC m=+4831.714769603" observedRunningTime="2026-01-23 07:40:29.089309726 +0000 UTC m=+4832.321817720" watchObservedRunningTime="2026-01-23 07:40:29.095408235 +0000 UTC m=+4832.327916229" Jan 23 07:40:34 crc kubenswrapper[4784]: I0123 07:40:34.949505 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:34 crc kubenswrapper[4784]: I0123 07:40:34.950342 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:34 crc kubenswrapper[4784]: I0123 07:40:34.998430 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:35 crc kubenswrapper[4784]: I0123 07:40:35.178038 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:38 crc kubenswrapper[4784]: I0123 07:40:38.226591 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5pjmq"] Jan 23 07:40:38 crc kubenswrapper[4784]: I0123 07:40:38.228451 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5pjmq" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerName="registry-server" containerID="cri-o://8eefff8a4bc1a8df96a2515bfb3011669d82b201b300052eae0a009273b3db49" gracePeriod=2 Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.177669 4784 generic.go:334] "Generic (PLEG): container finished" podID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerID="8eefff8a4bc1a8df96a2515bfb3011669d82b201b300052eae0a009273b3db49" exitCode=0 Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.177866 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pjmq" event={"ID":"c4a17b76-86eb-461a-9358-883f6347f3e5","Type":"ContainerDied","Data":"8eefff8a4bc1a8df96a2515bfb3011669d82b201b300052eae0a009273b3db49"} Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.269795 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.370487 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-catalog-content\") pod \"c4a17b76-86eb-461a-9358-883f6347f3e5\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.370570 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-utilities\") pod \"c4a17b76-86eb-461a-9358-883f6347f3e5\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.370696 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2m7j\" (UniqueName: \"kubernetes.io/projected/c4a17b76-86eb-461a-9358-883f6347f3e5-kube-api-access-d2m7j\") pod \"c4a17b76-86eb-461a-9358-883f6347f3e5\" (UID: \"c4a17b76-86eb-461a-9358-883f6347f3e5\") " Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.372235 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-utilities" (OuterVolumeSpecName: "utilities") pod "c4a17b76-86eb-461a-9358-883f6347f3e5" (UID: "c4a17b76-86eb-461a-9358-883f6347f3e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.377388 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a17b76-86eb-461a-9358-883f6347f3e5-kube-api-access-d2m7j" (OuterVolumeSpecName: "kube-api-access-d2m7j") pod "c4a17b76-86eb-461a-9358-883f6347f3e5" (UID: "c4a17b76-86eb-461a-9358-883f6347f3e5"). InnerVolumeSpecName "kube-api-access-d2m7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.429110 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4a17b76-86eb-461a-9358-883f6347f3e5" (UID: "c4a17b76-86eb-461a-9358-883f6347f3e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.473402 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.473450 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a17b76-86eb-461a-9358-883f6347f3e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:39 crc kubenswrapper[4784]: I0123 07:40:39.473471 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2m7j\" (UniqueName: \"kubernetes.io/projected/c4a17b76-86eb-461a-9358-883f6347f3e5-kube-api-access-d2m7j\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:40 crc kubenswrapper[4784]: I0123 07:40:40.196039 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pjmq" event={"ID":"c4a17b76-86eb-461a-9358-883f6347f3e5","Type":"ContainerDied","Data":"4fcaae869474e49ce6c31c942df796ba44705e39ea877ef37aaafdaa4ad771f6"} Jan 23 07:40:40 crc kubenswrapper[4784]: I0123 07:40:40.196209 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5pjmq" Jan 23 07:40:40 crc kubenswrapper[4784]: I0123 07:40:40.196615 4784 scope.go:117] "RemoveContainer" containerID="8eefff8a4bc1a8df96a2515bfb3011669d82b201b300052eae0a009273b3db49" Jan 23 07:40:40 crc kubenswrapper[4784]: I0123 07:40:40.235162 4784 scope.go:117] "RemoveContainer" containerID="c92f4be4f2f3416bf3f8e592948e58c411224f5f96bfe99dc265c8dbe65240e8" Jan 23 07:40:40 crc kubenswrapper[4784]: I0123 07:40:40.262813 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5pjmq"] Jan 23 07:40:40 crc kubenswrapper[4784]: I0123 07:40:40.275838 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5pjmq"] Jan 23 07:40:40 crc kubenswrapper[4784]: I0123 07:40:40.399350 4784 scope.go:117] "RemoveContainer" containerID="c9d658af972be848f316b122a2d48688ec01564716d77525f3ce480bf85afe5a" Jan 23 07:40:41 crc kubenswrapper[4784]: I0123 07:40:41.270003 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" path="/var/lib/kubelet/pods/c4a17b76-86eb-461a-9358-883f6347f3e5/volumes" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.199570 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-884l2"] Jan 23 07:40:52 crc kubenswrapper[4784]: E0123 07:40:52.200899 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerName="extract-content" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.201062 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerName="extract-content" Jan 23 07:40:52 crc kubenswrapper[4784]: E0123 07:40:52.201124 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerName="extract-utilities" Jan 23 07:40:52 
crc kubenswrapper[4784]: I0123 07:40:52.201138 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerName="extract-utilities" Jan 23 07:40:52 crc kubenswrapper[4784]: E0123 07:40:52.201178 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerName="registry-server" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.201193 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerName="registry-server" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.201548 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a17b76-86eb-461a-9358-883f6347f3e5" containerName="registry-server" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.204136 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.218896 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-884l2"] Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.310879 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-catalog-content\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.310970 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmw57\" (UniqueName: \"kubernetes.io/projected/77990d86-a028-40df-96fb-1ea611f3aefb-kube-api-access-qmw57\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc 
kubenswrapper[4784]: I0123 07:40:52.311156 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-utilities\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.413055 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-utilities\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.413246 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-catalog-content\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.413282 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmw57\" (UniqueName: \"kubernetes.io/projected/77990d86-a028-40df-96fb-1ea611f3aefb-kube-api-access-qmw57\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.413577 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-utilities\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.413965 4784 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-catalog-content\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.437079 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmw57\" (UniqueName: \"kubernetes.io/projected/77990d86-a028-40df-96fb-1ea611f3aefb-kube-api-access-qmw57\") pod \"redhat-operators-884l2\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:52 crc kubenswrapper[4784]: I0123 07:40:52.537301 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:40:53 crc kubenswrapper[4784]: I0123 07:40:53.024439 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-884l2"] Jan 23 07:40:53 crc kubenswrapper[4784]: I0123 07:40:53.410508 4784 generic.go:334] "Generic (PLEG): container finished" podID="77990d86-a028-40df-96fb-1ea611f3aefb" containerID="a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc" exitCode=0 Jan 23 07:40:53 crc kubenswrapper[4784]: I0123 07:40:53.410621 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884l2" event={"ID":"77990d86-a028-40df-96fb-1ea611f3aefb","Type":"ContainerDied","Data":"a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc"} Jan 23 07:40:53 crc kubenswrapper[4784]: I0123 07:40:53.410910 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884l2" event={"ID":"77990d86-a028-40df-96fb-1ea611f3aefb","Type":"ContainerStarted","Data":"ec7154afc111b32b1c909647304e4f57e6e9bc19620d1453a44a9053f4c44654"} Jan 23 07:40:55 crc 
kubenswrapper[4784]: I0123 07:40:55.441491 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884l2" event={"ID":"77990d86-a028-40df-96fb-1ea611f3aefb","Type":"ContainerStarted","Data":"d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee"} Jan 23 07:40:57 crc kubenswrapper[4784]: I0123 07:40:57.470039 4784 generic.go:334] "Generic (PLEG): container finished" podID="77990d86-a028-40df-96fb-1ea611f3aefb" containerID="d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee" exitCode=0 Jan 23 07:40:57 crc kubenswrapper[4784]: I0123 07:40:57.470104 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884l2" event={"ID":"77990d86-a028-40df-96fb-1ea611f3aefb","Type":"ContainerDied","Data":"d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee"} Jan 23 07:40:59 crc kubenswrapper[4784]: I0123 07:40:59.495551 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884l2" event={"ID":"77990d86-a028-40df-96fb-1ea611f3aefb","Type":"ContainerStarted","Data":"2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3"} Jan 23 07:40:59 crc kubenswrapper[4784]: I0123 07:40:59.535015 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-884l2" podStartSLOduration=2.347364364 podStartE2EDuration="7.534992328s" podCreationTimestamp="2026-01-23 07:40:52 +0000 UTC" firstStartedPulling="2026-01-23 07:40:53.412030277 +0000 UTC m=+4856.644538251" lastFinishedPulling="2026-01-23 07:40:58.599658211 +0000 UTC m=+4861.832166215" observedRunningTime="2026-01-23 07:40:59.526260653 +0000 UTC m=+4862.758768647" watchObservedRunningTime="2026-01-23 07:40:59.534992328 +0000 UTC m=+4862.767500312" Jan 23 07:41:02 crc kubenswrapper[4784]: I0123 07:41:02.538390 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:41:02 crc kubenswrapper[4784]: I0123 07:41:02.539117 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:41:03 crc kubenswrapper[4784]: I0123 07:41:03.612342 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-884l2" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="registry-server" probeResult="failure" output=< Jan 23 07:41:03 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 07:41:03 crc kubenswrapper[4784]: > Jan 23 07:41:12 crc kubenswrapper[4784]: I0123 07:41:12.624531 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:41:12 crc kubenswrapper[4784]: I0123 07:41:12.717474 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:41:12 crc kubenswrapper[4784]: I0123 07:41:12.885721 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-884l2"] Jan 23 07:41:13 crc kubenswrapper[4784]: I0123 07:41:13.693725 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-884l2" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="registry-server" containerID="cri-o://2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3" gracePeriod=2 Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.336476 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.460696 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-catalog-content\") pod \"77990d86-a028-40df-96fb-1ea611f3aefb\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.460907 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-utilities\") pod \"77990d86-a028-40df-96fb-1ea611f3aefb\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.461147 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmw57\" (UniqueName: \"kubernetes.io/projected/77990d86-a028-40df-96fb-1ea611f3aefb-kube-api-access-qmw57\") pod \"77990d86-a028-40df-96fb-1ea611f3aefb\" (UID: \"77990d86-a028-40df-96fb-1ea611f3aefb\") " Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.462305 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-utilities" (OuterVolumeSpecName: "utilities") pod "77990d86-a028-40df-96fb-1ea611f3aefb" (UID: "77990d86-a028-40df-96fb-1ea611f3aefb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.463427 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.472305 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77990d86-a028-40df-96fb-1ea611f3aefb-kube-api-access-qmw57" (OuterVolumeSpecName: "kube-api-access-qmw57") pod "77990d86-a028-40df-96fb-1ea611f3aefb" (UID: "77990d86-a028-40df-96fb-1ea611f3aefb"). InnerVolumeSpecName "kube-api-access-qmw57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.565678 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmw57\" (UniqueName: \"kubernetes.io/projected/77990d86-a028-40df-96fb-1ea611f3aefb-kube-api-access-qmw57\") on node \"crc\" DevicePath \"\"" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.640616 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77990d86-a028-40df-96fb-1ea611f3aefb" (UID: "77990d86-a028-40df-96fb-1ea611f3aefb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.667840 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77990d86-a028-40df-96fb-1ea611f3aefb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.708187 4784 generic.go:334] "Generic (PLEG): container finished" podID="77990d86-a028-40df-96fb-1ea611f3aefb" containerID="2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3" exitCode=0 Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.708223 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884l2" event={"ID":"77990d86-a028-40df-96fb-1ea611f3aefb","Type":"ContainerDied","Data":"2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3"} Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.708250 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884l2" event={"ID":"77990d86-a028-40df-96fb-1ea611f3aefb","Type":"ContainerDied","Data":"ec7154afc111b32b1c909647304e4f57e6e9bc19620d1453a44a9053f4c44654"} Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.708300 4784 scope.go:117] "RemoveContainer" containerID="2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.708971 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-884l2" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.731713 4784 scope.go:117] "RemoveContainer" containerID="d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.770032 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-884l2"] Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.780826 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-884l2"] Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.792344 4784 scope.go:117] "RemoveContainer" containerID="a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.831599 4784 scope.go:117] "RemoveContainer" containerID="2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3" Jan 23 07:41:14 crc kubenswrapper[4784]: E0123 07:41:14.832241 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3\": container with ID starting with 2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3 not found: ID does not exist" containerID="2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.832361 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3"} err="failed to get container status \"2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3\": rpc error: code = NotFound desc = could not find container \"2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3\": container with ID starting with 2393211fa89326e15b8796e3862ca69acc7df5e5c9355fddd279ee4e8df65ca3 not found: ID does 
not exist" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.832441 4784 scope.go:117] "RemoveContainer" containerID="d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee" Jan 23 07:41:14 crc kubenswrapper[4784]: E0123 07:41:14.833001 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee\": container with ID starting with d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee not found: ID does not exist" containerID="d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.833097 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee"} err="failed to get container status \"d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee\": rpc error: code = NotFound desc = could not find container \"d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee\": container with ID starting with d399262e600bc4236b09b45b6ebea1d7fe194a3bb9e27506ddc7d196edd524ee not found: ID does not exist" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.833178 4784 scope.go:117] "RemoveContainer" containerID="a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc" Jan 23 07:41:14 crc kubenswrapper[4784]: E0123 07:41:14.833572 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc\": container with ID starting with a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc not found: ID does not exist" containerID="a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc" Jan 23 07:41:14 crc kubenswrapper[4784]: I0123 07:41:14.833644 4784 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc"} err="failed to get container status \"a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc\": rpc error: code = NotFound desc = could not find container \"a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc\": container with ID starting with a74c10a523b1bc5a636d1cd81bc949798951ee31ed5034b16cb32d2564ec41dc not found: ID does not exist" Jan 23 07:41:15 crc kubenswrapper[4784]: I0123 07:41:15.269447 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" path="/var/lib/kubelet/pods/77990d86-a028-40df-96fb-1ea611f3aefb/volumes" Jan 23 07:41:23 crc kubenswrapper[4784]: I0123 07:41:23.603248 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:41:23 crc kubenswrapper[4784]: I0123 07:41:23.603911 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:41:53 crc kubenswrapper[4784]: I0123 07:41:53.604191 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:41:53 crc kubenswrapper[4784]: I0123 07:41:53.604906 4784 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:42:23 crc kubenswrapper[4784]: I0123 07:42:23.604204 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:42:23 crc kubenswrapper[4784]: I0123 07:42:23.605020 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:42:23 crc kubenswrapper[4784]: I0123 07:42:23.605090 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd"
Jan 23 07:42:23 crc kubenswrapper[4784]: I0123 07:42:23.606036 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f2bad361834119d810d115649299c2d95460097ac625999f8513e258e612407"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 07:42:23 crc kubenswrapper[4784]: I0123 07:42:23.606142 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://7f2bad361834119d810d115649299c2d95460097ac625999f8513e258e612407" gracePeriod=600
Jan 23 07:42:24 crc kubenswrapper[4784]: I0123 07:42:24.560078 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="7f2bad361834119d810d115649299c2d95460097ac625999f8513e258e612407" exitCode=0
Jan 23 07:42:24 crc kubenswrapper[4784]: I0123 07:42:24.560215 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"7f2bad361834119d810d115649299c2d95460097ac625999f8513e258e612407"}
Jan 23 07:42:24 crc kubenswrapper[4784]: I0123 07:42:24.560746 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"}
Jan 23 07:42:24 crc kubenswrapper[4784]: I0123 07:42:24.560787 4784 scope.go:117] "RemoveContainer" containerID="8edb2af4056a969720e27a85a5acfa3cc2c6f48e9d5b210aaae2ddb4a48fcba1"
Jan 23 07:44:20 crc kubenswrapper[4784]: I0123 07:44:20.772462 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-856bb5496c-5hkpt" podUID="bfac942c-ab7e-42a0-8091-29079fd4da0e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Jan 23 07:44:23 crc kubenswrapper[4784]: I0123 07:44:23.603439 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:44:23 crc kubenswrapper[4784]: I0123 07:44:23.603903 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:44:53 crc kubenswrapper[4784]: I0123 07:44:53.603666 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:44:53 crc kubenswrapper[4784]: I0123 07:44:53.604279 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.183220 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"]
Jan 23 07:45:00 crc kubenswrapper[4784]: E0123 07:45:00.184306 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="extract-utilities"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.184350 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="extract-utilities"
Jan 23 07:45:00 crc kubenswrapper[4784]: E0123 07:45:00.184386 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="extract-content"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.184394 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="extract-content"
Jan 23 07:45:00 crc kubenswrapper[4784]: E0123 07:45:00.184410 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="registry-server"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.184421 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="registry-server"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.184707 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="77990d86-a028-40df-96fb-1ea611f3aefb" containerName="registry-server"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.185588 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.189454 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.189724 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.200633 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"]
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.263211 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdk4n\" (UniqueName: \"kubernetes.io/projected/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-kube-api-access-vdk4n\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.263259 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-secret-volume\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.263413 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-config-volume\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.365914 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-config-volume\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.366444 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdk4n\" (UniqueName: \"kubernetes.io/projected/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-kube-api-access-vdk4n\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.366480 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-secret-volume\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.367644 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-config-volume\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.380421 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-secret-volume\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.389682 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdk4n\" (UniqueName: \"kubernetes.io/projected/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-kube-api-access-vdk4n\") pod \"collect-profiles-29485905-rvv7q\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:00 crc kubenswrapper[4784]: I0123 07:45:00.517467 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:01 crc kubenswrapper[4784]: I0123 07:45:01.094364 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"]
Jan 23 07:45:02 crc kubenswrapper[4784]: I0123 07:45:02.007912 4784 generic.go:334] "Generic (PLEG): container finished" podID="ef7ebc1a-8154-476d-bb6d-45d5efef1f0b" containerID="d02e16d3ca45fce5147a4f0094134f01018a6e4766c6ef757bc3e37e97db96a8" exitCode=0
Jan 23 07:45:02 crc kubenswrapper[4784]: I0123 07:45:02.007999 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q" event={"ID":"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b","Type":"ContainerDied","Data":"d02e16d3ca45fce5147a4f0094134f01018a6e4766c6ef757bc3e37e97db96a8"}
Jan 23 07:45:02 crc kubenswrapper[4784]: I0123 07:45:02.008384 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q" event={"ID":"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b","Type":"ContainerStarted","Data":"17e84a324c09050149692817fadf633e94e1fedb2093a900eb223ee65c0709c1"}
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.569650 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.660332 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-secret-volume\") pod \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") "
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.660402 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdk4n\" (UniqueName: \"kubernetes.io/projected/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-kube-api-access-vdk4n\") pod \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") "
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.660498 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-config-volume\") pod \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\" (UID: \"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b\") "
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.662162 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "ef7ebc1a-8154-476d-bb6d-45d5efef1f0b" (UID: "ef7ebc1a-8154-476d-bb6d-45d5efef1f0b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.669598 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ef7ebc1a-8154-476d-bb6d-45d5efef1f0b" (UID: "ef7ebc1a-8154-476d-bb6d-45d5efef1f0b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.673010 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-kube-api-access-vdk4n" (OuterVolumeSpecName: "kube-api-access-vdk4n") pod "ef7ebc1a-8154-476d-bb6d-45d5efef1f0b" (UID: "ef7ebc1a-8154-476d-bb6d-45d5efef1f0b"). InnerVolumeSpecName "kube-api-access-vdk4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.763910 4784 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.763940 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdk4n\" (UniqueName: \"kubernetes.io/projected/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-kube-api-access-vdk4n\") on node \"crc\" DevicePath \"\""
Jan 23 07:45:03 crc kubenswrapper[4784]: I0123 07:45:03.763950 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef7ebc1a-8154-476d-bb6d-45d5efef1f0b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:45:04 crc kubenswrapper[4784]: I0123 07:45:04.027565 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q" event={"ID":"ef7ebc1a-8154-476d-bb6d-45d5efef1f0b","Type":"ContainerDied","Data":"17e84a324c09050149692817fadf633e94e1fedb2093a900eb223ee65c0709c1"}
Jan 23 07:45:04 crc kubenswrapper[4784]: I0123 07:45:04.027604 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e84a324c09050149692817fadf633e94e1fedb2093a900eb223ee65c0709c1"
Jan 23 07:45:04 crc kubenswrapper[4784]: I0123 07:45:04.027637 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-rvv7q"
Jan 23 07:45:04 crc kubenswrapper[4784]: I0123 07:45:04.684436 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj"]
Jan 23 07:45:04 crc kubenswrapper[4784]: I0123 07:45:04.697823 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-5wsnj"]
Jan 23 07:45:05 crc kubenswrapper[4784]: I0123 07:45:05.268002 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36556d0b-98ff-4f56-944e-a8d9c5baa9e0" path="/var/lib/kubelet/pods/36556d0b-98ff-4f56-944e-a8d9c5baa9e0/volumes"
Jan 23 07:45:23 crc kubenswrapper[4784]: I0123 07:45:23.603882 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:45:23 crc kubenswrapper[4784]: I0123 07:45:23.604593 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:45:23 crc kubenswrapper[4784]: I0123 07:45:23.604664 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd"
Jan 23 07:45:23 crc kubenswrapper[4784]: I0123 07:45:23.607130 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 07:45:23 crc kubenswrapper[4784]: I0123 07:45:23.607277 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" gracePeriod=600
Jan 23 07:45:23 crc kubenswrapper[4784]: E0123 07:45:23.754266 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:45:24 crc kubenswrapper[4784]: I0123 07:45:24.252238 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" exitCode=0
Jan 23 07:45:24 crc kubenswrapper[4784]: I0123 07:45:24.252302 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"}
Jan 23 07:45:24 crc kubenswrapper[4784]: I0123 07:45:24.252368 4784 scope.go:117] "RemoveContainer" containerID="7f2bad361834119d810d115649299c2d95460097ac625999f8513e258e612407"
Jan 23 07:45:24 crc kubenswrapper[4784]: I0123 07:45:24.253847 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:45:24 crc kubenswrapper[4784]: E0123 07:45:24.254256 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:45:37 crc kubenswrapper[4784]: I0123 07:45:37.267364 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:45:37 crc kubenswrapper[4784]: E0123 07:45:37.268672 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:45:42 crc kubenswrapper[4784]: I0123 07:45:42.346036 4784 scope.go:117] "RemoveContainer" containerID="1c17c3c3ace1a389110c7e4f8cd8aaaf877ed6ca54466a04c37fa06226826dad"
Jan 23 07:45:50 crc kubenswrapper[4784]: I0123 07:45:50.254387 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:45:50 crc kubenswrapper[4784]: E0123 07:45:50.255587 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:46:05 crc kubenswrapper[4784]: I0123 07:46:05.254687 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:46:05 crc kubenswrapper[4784]: E0123 07:46:05.255804 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:46:18 crc kubenswrapper[4784]: I0123 07:46:18.254208 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:46:18 crc kubenswrapper[4784]: E0123 07:46:18.255295 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:46:30 crc kubenswrapper[4784]: I0123 07:46:30.272377 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:46:30 crc kubenswrapper[4784]: E0123 07:46:30.274112 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:46:43 crc kubenswrapper[4784]: I0123 07:46:43.254683 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:46:43 crc kubenswrapper[4784]: E0123 07:46:43.255723 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:46:58 crc kubenswrapper[4784]: I0123 07:46:58.254487 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:46:58 crc kubenswrapper[4784]: E0123 07:46:58.255920 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:47:11 crc kubenswrapper[4784]: I0123 07:47:11.254987 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:47:11 crc kubenswrapper[4784]: E0123 07:47:11.256232 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:47:23 crc kubenswrapper[4784]: I0123 07:47:23.254225 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:47:23 crc kubenswrapper[4784]: E0123 07:47:23.255203 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:47:38 crc kubenswrapper[4784]: I0123 07:47:38.254391 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:47:38 crc kubenswrapper[4784]: E0123 07:47:38.255542 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:47:52 crc kubenswrapper[4784]: I0123 07:47:52.254656 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:47:52 crc kubenswrapper[4784]: E0123 07:47:52.255936 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:48:03 crc kubenswrapper[4784]: I0123 07:48:03.255233 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:48:03 crc kubenswrapper[4784]: E0123 07:48:03.256705 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.264439 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bhdzm"]
Jan 23 07:48:11 crc kubenswrapper[4784]: E0123 07:48:11.265320 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7ebc1a-8154-476d-bb6d-45d5efef1f0b" containerName="collect-profiles"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.265350 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7ebc1a-8154-476d-bb6d-45d5efef1f0b" containerName="collect-profiles"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.265652 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7ebc1a-8154-476d-bb6d-45d5efef1f0b" containerName="collect-profiles"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.267520 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.296292 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhdzm"]
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.315848 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg9zm\" (UniqueName: \"kubernetes.io/projected/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-kube-api-access-gg9zm\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.315974 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-utilities\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.316089 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-catalog-content\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.418171 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-catalog-content\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.418581 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-catalog-content\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.418806 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg9zm\" (UniqueName: \"kubernetes.io/projected/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-kube-api-access-gg9zm\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.419146 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-utilities\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.419405 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-utilities\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.436628 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg9zm\" (UniqueName: \"kubernetes.io/projected/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-kube-api-access-gg9zm\") pod \"redhat-marketplace-bhdzm\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:11 crc kubenswrapper[4784]: I0123 07:48:11.588826 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:12 crc kubenswrapper[4784]: I0123 07:48:12.081901 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhdzm"]
Jan 23 07:48:12 crc kubenswrapper[4784]: I0123 07:48:12.290235 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhdzm" event={"ID":"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b","Type":"ContainerStarted","Data":"0b5e7a0583fbc16171cc7e92b51d9ef501b90ac217c7a7088ed09e3089e7b5ac"}
Jan 23 07:48:13 crc kubenswrapper[4784]: I0123 07:48:13.306317 4784 generic.go:334] "Generic (PLEG): container finished" podID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerID="2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66" exitCode=0
Jan 23 07:48:13 crc kubenswrapper[4784]: I0123 07:48:13.306402 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhdzm" event={"ID":"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b","Type":"ContainerDied","Data":"2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66"}
Jan 23 07:48:13 crc kubenswrapper[4784]: I0123 07:48:13.311677 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 07:48:16 crc kubenswrapper[4784]: I0123 07:48:16.345858 4784 generic.go:334] "Generic (PLEG): container finished" podID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerID="5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12" exitCode=0
Jan 23 07:48:16 crc kubenswrapper[4784]: I0123 07:48:16.345921 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhdzm" event={"ID":"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b","Type":"ContainerDied","Data":"5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12"}
Jan 23 07:48:17 crc kubenswrapper[4784]: I0123 07:48:17.259851 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65"
Jan 23 07:48:17 crc kubenswrapper[4784]: E0123 07:48:17.260460 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 07:48:17 crc kubenswrapper[4784]: I0123 07:48:17.363633 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhdzm" event={"ID":"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b","Type":"ContainerStarted","Data":"d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7"}
Jan 23 07:48:21 crc kubenswrapper[4784]: I0123 07:48:21.589725 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:21 crc kubenswrapper[4784]: I0123 07:48:21.590534 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:21 crc kubenswrapper[4784]: I0123 07:48:21.667610 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:21 crc kubenswrapper[4784]: I0123 07:48:21.697054 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bhdzm" podStartSLOduration=7.181233846 podStartE2EDuration="10.696994006s" podCreationTimestamp="2026-01-23 07:48:11 +0000 UTC" firstStartedPulling="2026-01-23 07:48:13.311247191 +0000 UTC m=+5296.543755195" lastFinishedPulling="2026-01-23 07:48:16.827007361 +0000 UTC m=+5300.059515355" observedRunningTime="2026-01-23 07:48:17.396528388 +0000 UTC m=+5300.629036392" watchObservedRunningTime="2026-01-23 07:48:21.696994006 +0000 UTC m=+5304.929502020"
Jan 23 07:48:22 crc kubenswrapper[4784]: I0123 07:48:22.493205 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:22 crc kubenswrapper[4784]: I0123 07:48:22.564304 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhdzm"]
Jan 23 07:48:24 crc kubenswrapper[4784]: I0123 07:48:24.441600 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bhdzm" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerName="registry-server" containerID="cri-o://d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7" gracePeriod=2
Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.074240 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhdzm"
Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.219350 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg9zm\" (UniqueName: \"kubernetes.io/projected/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-kube-api-access-gg9zm\") pod \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") "
Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.219394 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-catalog-content\") pod \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") "
Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.219581 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-utilities\") pod \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\" (UID: \"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b\") " Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.220388 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-utilities" (OuterVolumeSpecName: "utilities") pod "58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" (UID: "58bfbbda-6aae-43fb-b99d-ecb829f6ef2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.227593 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-kube-api-access-gg9zm" (OuterVolumeSpecName: "kube-api-access-gg9zm") pod "58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" (UID: "58bfbbda-6aae-43fb-b99d-ecb829f6ef2b"). InnerVolumeSpecName "kube-api-access-gg9zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.243640 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" (UID: "58bfbbda-6aae-43fb-b99d-ecb829f6ef2b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.322710 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.322737 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg9zm\" (UniqueName: \"kubernetes.io/projected/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-kube-api-access-gg9zm\") on node \"crc\" DevicePath \"\"" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.322760 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.456321 4784 generic.go:334] "Generic (PLEG): container finished" podID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerID="d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7" exitCode=0 Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.456382 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhdzm" event={"ID":"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b","Type":"ContainerDied","Data":"d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7"} Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.456425 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhdzm" event={"ID":"58bfbbda-6aae-43fb-b99d-ecb829f6ef2b","Type":"ContainerDied","Data":"0b5e7a0583fbc16171cc7e92b51d9ef501b90ac217c7a7088ed09e3089e7b5ac"} Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.456454 4784 scope.go:117] "RemoveContainer" containerID="d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 
07:48:25.456639 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhdzm" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.486344 4784 scope.go:117] "RemoveContainer" containerID="5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.495597 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhdzm"] Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.507589 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhdzm"] Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.514118 4784 scope.go:117] "RemoveContainer" containerID="2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.571624 4784 scope.go:117] "RemoveContainer" containerID="d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7" Jan 23 07:48:25 crc kubenswrapper[4784]: E0123 07:48:25.572263 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7\": container with ID starting with d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7 not found: ID does not exist" containerID="d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.572304 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7"} err="failed to get container status \"d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7\": rpc error: code = NotFound desc = could not find container \"d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7\": container with ID starting with 
d2c3a118c3b6a7fb9f2393ce247f265d988ccfecc0c2bf22ca9281640d79a5b7 not found: ID does not exist" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.572327 4784 scope.go:117] "RemoveContainer" containerID="5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12" Jan 23 07:48:25 crc kubenswrapper[4784]: E0123 07:48:25.572778 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12\": container with ID starting with 5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12 not found: ID does not exist" containerID="5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.572819 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12"} err="failed to get container status \"5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12\": rpc error: code = NotFound desc = could not find container \"5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12\": container with ID starting with 5f41424ac80caa28d4d25ae3706dbef7172dc1a84b95b0243bf2c74110f24a12 not found: ID does not exist" Jan 23 07:48:25 crc kubenswrapper[4784]: I0123 07:48:25.572846 4784 scope.go:117] "RemoveContainer" containerID="2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66" Jan 23 07:48:25 crc kubenswrapper[4784]: E0123 07:48:25.573954 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66\": container with ID starting with 2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66 not found: ID does not exist" containerID="2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66" Jan 23 07:48:25 crc 
kubenswrapper[4784]: I0123 07:48:25.574004 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66"} err="failed to get container status \"2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66\": rpc error: code = NotFound desc = could not find container \"2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66\": container with ID starting with 2ea2d057361213e8e63d6af1bcaa44a4137c011c94734eea3e8cbf2e362bba66 not found: ID does not exist" Jan 23 07:48:27 crc kubenswrapper[4784]: I0123 07:48:27.280982 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" path="/var/lib/kubelet/pods/58bfbbda-6aae-43fb-b99d-ecb829f6ef2b/volumes" Jan 23 07:48:32 crc kubenswrapper[4784]: I0123 07:48:32.255031 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:48:32 crc kubenswrapper[4784]: E0123 07:48:32.255849 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:48:47 crc kubenswrapper[4784]: I0123 07:48:47.270280 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:48:47 crc kubenswrapper[4784]: E0123 07:48:47.271415 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:49:01 crc kubenswrapper[4784]: I0123 07:49:01.254991 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:49:01 crc kubenswrapper[4784]: E0123 07:49:01.256530 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:49:12 crc kubenswrapper[4784]: I0123 07:49:12.254529 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:49:12 crc kubenswrapper[4784]: E0123 07:49:12.255928 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:49:25 crc kubenswrapper[4784]: I0123 07:49:25.255008 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:49:25 crc kubenswrapper[4784]: E0123 07:49:25.256158 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:49:40 crc kubenswrapper[4784]: I0123 07:49:40.255139 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:49:40 crc kubenswrapper[4784]: E0123 07:49:40.256241 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:49:55 crc kubenswrapper[4784]: I0123 07:49:55.254248 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:49:55 crc kubenswrapper[4784]: E0123 07:49:55.255222 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:50:07 crc kubenswrapper[4784]: I0123 07:50:07.269145 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:50:07 crc kubenswrapper[4784]: E0123 07:50:07.270622 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:50:20 crc kubenswrapper[4784]: I0123 07:50:20.254021 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:50:20 crc kubenswrapper[4784]: E0123 07:50:20.255199 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:50:31 crc kubenswrapper[4784]: I0123 07:50:31.989058 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lch54"] Jan 23 07:50:31 crc kubenswrapper[4784]: E0123 07:50:31.991239 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerName="registry-server" Jan 23 07:50:31 crc kubenswrapper[4784]: I0123 07:50:31.991292 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerName="registry-server" Jan 23 07:50:31 crc kubenswrapper[4784]: E0123 07:50:31.991334 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerName="extract-content" Jan 23 07:50:31 crc kubenswrapper[4784]: I0123 07:50:31.991342 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerName="extract-content" Jan 23 07:50:31 crc kubenswrapper[4784]: E0123 07:50:31.991359 4784 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerName="extract-utilities" Jan 23 07:50:31 crc kubenswrapper[4784]: I0123 07:50:31.991367 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerName="extract-utilities" Jan 23 07:50:31 crc kubenswrapper[4784]: I0123 07:50:31.991643 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="58bfbbda-6aae-43fb-b99d-ecb829f6ef2b" containerName="registry-server" Jan 23 07:50:31 crc kubenswrapper[4784]: I0123 07:50:31.993381 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.015940 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lch54"] Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.151311 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-utilities\") pod \"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.152175 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm7mm\" (UniqueName: \"kubernetes.io/projected/7d1c8e51-c730-4523-9c48-e8dbc626c95d-kube-api-access-rm7mm\") pod \"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.152360 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-catalog-content\") pod 
\"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.253831 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.254399 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm7mm\" (UniqueName: \"kubernetes.io/projected/7d1c8e51-c730-4523-9c48-e8dbc626c95d-kube-api-access-rm7mm\") pod \"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.254528 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-catalog-content\") pod \"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.254584 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-utilities\") pod \"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.255606 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-catalog-content\") pod \"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.255739 4784 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-utilities\") pod \"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.282796 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm7mm\" (UniqueName: \"kubernetes.io/projected/7d1c8e51-c730-4523-9c48-e8dbc626c95d-kube-api-access-rm7mm\") pod \"community-operators-lch54\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:32 crc kubenswrapper[4784]: I0123 07:50:32.323661 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:33 crc kubenswrapper[4784]: I0123 07:50:33.487360 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lch54"] Jan 23 07:50:33 crc kubenswrapper[4784]: W0123 07:50:33.494207 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d1c8e51_c730_4523_9c48_e8dbc626c95d.slice/crio-d5a803af2b5a5448435bcc2e3599b7cbcb80cf03f906c7a47966cd5c93f7fdbb WatchSource:0}: Error finding container d5a803af2b5a5448435bcc2e3599b7cbcb80cf03f906c7a47966cd5c93f7fdbb: Status 404 returned error can't find the container with id d5a803af2b5a5448435bcc2e3599b7cbcb80cf03f906c7a47966cd5c93f7fdbb Jan 23 07:50:34 crc kubenswrapper[4784]: I0123 07:50:34.052199 4784 generic.go:334] "Generic (PLEG): container finished" podID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerID="c6c645ef245e688b56da8b0c44fa4b7462a6fdb1c37c7305bc3f19a1d0d4d97b" exitCode=0 Jan 23 07:50:34 crc kubenswrapper[4784]: I0123 07:50:34.052242 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-lch54" event={"ID":"7d1c8e51-c730-4523-9c48-e8dbc626c95d","Type":"ContainerDied","Data":"c6c645ef245e688b56da8b0c44fa4b7462a6fdb1c37c7305bc3f19a1d0d4d97b"} Jan 23 07:50:34 crc kubenswrapper[4784]: I0123 07:50:34.052604 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lch54" event={"ID":"7d1c8e51-c730-4523-9c48-e8dbc626c95d","Type":"ContainerStarted","Data":"d5a803af2b5a5448435bcc2e3599b7cbcb80cf03f906c7a47966cd5c93f7fdbb"} Jan 23 07:50:34 crc kubenswrapper[4784]: I0123 07:50:34.056845 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"8a4925ef8f367f766d6f89868babd919efb98fcf740c1549570297eb34a4c036"} Jan 23 07:50:36 crc kubenswrapper[4784]: I0123 07:50:36.101153 4784 generic.go:334] "Generic (PLEG): container finished" podID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerID="caf1fcbd1f0620ea6104df2d9ace749fee7e8eb1783d456009a3cf737147d0c2" exitCode=0 Jan 23 07:50:36 crc kubenswrapper[4784]: I0123 07:50:36.102935 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lch54" event={"ID":"7d1c8e51-c730-4523-9c48-e8dbc626c95d","Type":"ContainerDied","Data":"caf1fcbd1f0620ea6104df2d9ace749fee7e8eb1783d456009a3cf737147d0c2"} Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.116509 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lch54" event={"ID":"7d1c8e51-c730-4523-9c48-e8dbc626c95d","Type":"ContainerStarted","Data":"b155068579279bfcaec733e5b8bc301802454ea3ce8e262262b77803c1bbc819"} Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.147070 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lch54" podStartSLOduration=3.673562828 
podStartE2EDuration="6.146982772s" podCreationTimestamp="2026-01-23 07:50:31 +0000 UTC" firstStartedPulling="2026-01-23 07:50:34.055197934 +0000 UTC m=+5437.287705948" lastFinishedPulling="2026-01-23 07:50:36.528617908 +0000 UTC m=+5439.761125892" observedRunningTime="2026-01-23 07:50:37.135635804 +0000 UTC m=+5440.368143838" watchObservedRunningTime="2026-01-23 07:50:37.146982772 +0000 UTC m=+5440.379490776" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.360686 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pvvr4"] Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.363658 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.373792 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvvr4"] Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.474389 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-utilities\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.474459 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-catalog-content\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.474641 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p7dx\" (UniqueName: 
\"kubernetes.io/projected/36881164-593f-484a-977d-abca9962673b-kube-api-access-7p7dx\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.576846 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p7dx\" (UniqueName: \"kubernetes.io/projected/36881164-593f-484a-977d-abca9962673b-kube-api-access-7p7dx\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.576964 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-utilities\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.577004 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-catalog-content\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.577570 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-catalog-content\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.578719 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-utilities\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.609670 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p7dx\" (UniqueName: \"kubernetes.io/projected/36881164-593f-484a-977d-abca9962673b-kube-api-access-7p7dx\") pod \"certified-operators-pvvr4\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:37 crc kubenswrapper[4784]: I0123 07:50:37.700144 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:38 crc kubenswrapper[4784]: I0123 07:50:38.260529 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvvr4"] Jan 23 07:50:38 crc kubenswrapper[4784]: W0123 07:50:38.265123 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36881164_593f_484a_977d_abca9962673b.slice/crio-158f7563a656f0b92f15ae1c4d75db551aba43f3ac5e3eeceac1f77e67bd83a8 WatchSource:0}: Error finding container 158f7563a656f0b92f15ae1c4d75db551aba43f3ac5e3eeceac1f77e67bd83a8: Status 404 returned error can't find the container with id 158f7563a656f0b92f15ae1c4d75db551aba43f3ac5e3eeceac1f77e67bd83a8 Jan 23 07:50:39 crc kubenswrapper[4784]: I0123 07:50:39.142843 4784 generic.go:334] "Generic (PLEG): container finished" podID="36881164-593f-484a-977d-abca9962673b" containerID="51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623" exitCode=0 Jan 23 07:50:39 crc kubenswrapper[4784]: I0123 07:50:39.142908 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvvr4" 
event={"ID":"36881164-593f-484a-977d-abca9962673b","Type":"ContainerDied","Data":"51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623"} Jan 23 07:50:39 crc kubenswrapper[4784]: I0123 07:50:39.143320 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvvr4" event={"ID":"36881164-593f-484a-977d-abca9962673b","Type":"ContainerStarted","Data":"158f7563a656f0b92f15ae1c4d75db551aba43f3ac5e3eeceac1f77e67bd83a8"} Jan 23 07:50:40 crc kubenswrapper[4784]: I0123 07:50:40.165805 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvvr4" event={"ID":"36881164-593f-484a-977d-abca9962673b","Type":"ContainerStarted","Data":"90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209"} Jan 23 07:50:41 crc kubenswrapper[4784]: I0123 07:50:41.178221 4784 generic.go:334] "Generic (PLEG): container finished" podID="36881164-593f-484a-977d-abca9962673b" containerID="90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209" exitCode=0 Jan 23 07:50:41 crc kubenswrapper[4784]: I0123 07:50:41.178303 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvvr4" event={"ID":"36881164-593f-484a-977d-abca9962673b","Type":"ContainerDied","Data":"90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209"} Jan 23 07:50:42 crc kubenswrapper[4784]: I0123 07:50:42.191654 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvvr4" event={"ID":"36881164-593f-484a-977d-abca9962673b","Type":"ContainerStarted","Data":"f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896"} Jan 23 07:50:42 crc kubenswrapper[4784]: I0123 07:50:42.216853 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pvvr4" podStartSLOduration=2.749712981 podStartE2EDuration="5.216831131s" podCreationTimestamp="2026-01-23 07:50:37 
+0000 UTC" firstStartedPulling="2026-01-23 07:50:39.145916035 +0000 UTC m=+5442.378424049" lastFinishedPulling="2026-01-23 07:50:41.613034195 +0000 UTC m=+5444.845542199" observedRunningTime="2026-01-23 07:50:42.211727876 +0000 UTC m=+5445.444235880" watchObservedRunningTime="2026-01-23 07:50:42.216831131 +0000 UTC m=+5445.449339145" Jan 23 07:50:42 crc kubenswrapper[4784]: I0123 07:50:42.324997 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:42 crc kubenswrapper[4784]: I0123 07:50:42.325084 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:42 crc kubenswrapper[4784]: I0123 07:50:42.404181 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:43 crc kubenswrapper[4784]: I0123 07:50:43.290306 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:44 crc kubenswrapper[4784]: I0123 07:50:44.571011 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lch54"] Jan 23 07:50:45 crc kubenswrapper[4784]: I0123 07:50:45.228494 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lch54" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerName="registry-server" containerID="cri-o://b155068579279bfcaec733e5b8bc301802454ea3ce8e262262b77803c1bbc819" gracePeriod=2 Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.243827 4784 generic.go:334] "Generic (PLEG): container finished" podID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerID="b155068579279bfcaec733e5b8bc301802454ea3ce8e262262b77803c1bbc819" exitCode=0 Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.243921 4784 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-lch54" event={"ID":"7d1c8e51-c730-4523-9c48-e8dbc626c95d","Type":"ContainerDied","Data":"b155068579279bfcaec733e5b8bc301802454ea3ce8e262262b77803c1bbc819"} Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.244118 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lch54" event={"ID":"7d1c8e51-c730-4523-9c48-e8dbc626c95d","Type":"ContainerDied","Data":"d5a803af2b5a5448435bcc2e3599b7cbcb80cf03f906c7a47966cd5c93f7fdbb"} Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.244137 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5a803af2b5a5448435bcc2e3599b7cbcb80cf03f906c7a47966cd5c93f7fdbb" Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.415893 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.589218 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm7mm\" (UniqueName: \"kubernetes.io/projected/7d1c8e51-c730-4523-9c48-e8dbc626c95d-kube-api-access-rm7mm\") pod \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.589299 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-utilities\") pod \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\" (UID: \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.589418 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-catalog-content\") pod \"7d1c8e51-c730-4523-9c48-e8dbc626c95d\" (UID: 
\"7d1c8e51-c730-4523-9c48-e8dbc626c95d\") " Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.592319 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-utilities" (OuterVolumeSpecName: "utilities") pod "7d1c8e51-c730-4523-9c48-e8dbc626c95d" (UID: "7d1c8e51-c730-4523-9c48-e8dbc626c95d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.599420 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1c8e51-c730-4523-9c48-e8dbc626c95d-kube-api-access-rm7mm" (OuterVolumeSpecName: "kube-api-access-rm7mm") pod "7d1c8e51-c730-4523-9c48-e8dbc626c95d" (UID: "7d1c8e51-c730-4523-9c48-e8dbc626c95d"). InnerVolumeSpecName "kube-api-access-rm7mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.692299 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm7mm\" (UniqueName: \"kubernetes.io/projected/7d1c8e51-c730-4523-9c48-e8dbc626c95d-kube-api-access-rm7mm\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:46 crc kubenswrapper[4784]: I0123 07:50:46.692336 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:47 crc kubenswrapper[4784]: I0123 07:50:47.166319 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d1c8e51-c730-4523-9c48-e8dbc626c95d" (UID: "7d1c8e51-c730-4523-9c48-e8dbc626c95d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:50:47 crc kubenswrapper[4784]: I0123 07:50:47.223967 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1c8e51-c730-4523-9c48-e8dbc626c95d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:47 crc kubenswrapper[4784]: I0123 07:50:47.269168 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lch54" Jan 23 07:50:47 crc kubenswrapper[4784]: I0123 07:50:47.337531 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lch54"] Jan 23 07:50:47 crc kubenswrapper[4784]: I0123 07:50:47.353218 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lch54"] Jan 23 07:50:47 crc kubenswrapper[4784]: I0123 07:50:47.701874 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:47 crc kubenswrapper[4784]: I0123 07:50:47.702468 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:47 crc kubenswrapper[4784]: I0123 07:50:47.779973 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:48 crc kubenswrapper[4784]: I0123 07:50:48.848900 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:49 crc kubenswrapper[4784]: I0123 07:50:49.269003 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" path="/var/lib/kubelet/pods/7d1c8e51-c730-4523-9c48-e8dbc626c95d/volumes" Jan 23 07:50:49 crc kubenswrapper[4784]: I0123 07:50:49.950519 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-pvvr4"] Jan 23 07:50:51 crc kubenswrapper[4784]: I0123 07:50:51.300577 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pvvr4" podUID="36881164-593f-484a-977d-abca9962673b" containerName="registry-server" containerID="cri-o://f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896" gracePeriod=2 Jan 23 07:50:51 crc kubenswrapper[4784]: I0123 07:50:51.858264 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:51 crc kubenswrapper[4784]: I0123 07:50:51.940466 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-catalog-content\") pod \"36881164-593f-484a-977d-abca9962673b\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " Jan 23 07:50:51 crc kubenswrapper[4784]: I0123 07:50:51.940775 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p7dx\" (UniqueName: \"kubernetes.io/projected/36881164-593f-484a-977d-abca9962673b-kube-api-access-7p7dx\") pod \"36881164-593f-484a-977d-abca9962673b\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " Jan 23 07:50:51 crc kubenswrapper[4784]: I0123 07:50:51.940845 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-utilities\") pod \"36881164-593f-484a-977d-abca9962673b\" (UID: \"36881164-593f-484a-977d-abca9962673b\") " Jan 23 07:50:51 crc kubenswrapper[4784]: I0123 07:50:51.941562 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-utilities" (OuterVolumeSpecName: "utilities") pod "36881164-593f-484a-977d-abca9962673b" (UID: 
"36881164-593f-484a-977d-abca9962673b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:50:51 crc kubenswrapper[4784]: I0123 07:50:51.949077 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36881164-593f-484a-977d-abca9962673b-kube-api-access-7p7dx" (OuterVolumeSpecName: "kube-api-access-7p7dx") pod "36881164-593f-484a-977d-abca9962673b" (UID: "36881164-593f-484a-977d-abca9962673b"). InnerVolumeSpecName "kube-api-access-7p7dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:50:51 crc kubenswrapper[4784]: I0123 07:50:51.990640 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36881164-593f-484a-977d-abca9962673b" (UID: "36881164-593f-484a-977d-abca9962673b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.044285 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.044315 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36881164-593f-484a-977d-abca9962673b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.044327 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p7dx\" (UniqueName: \"kubernetes.io/projected/36881164-593f-484a-977d-abca9962673b-kube-api-access-7p7dx\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.322004 4784 generic.go:334] "Generic (PLEG): container finished" 
podID="36881164-593f-484a-977d-abca9962673b" containerID="f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896" exitCode=0 Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.322069 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvvr4" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.322094 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvvr4" event={"ID":"36881164-593f-484a-977d-abca9962673b","Type":"ContainerDied","Data":"f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896"} Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.324252 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvvr4" event={"ID":"36881164-593f-484a-977d-abca9962673b","Type":"ContainerDied","Data":"158f7563a656f0b92f15ae1c4d75db551aba43f3ac5e3eeceac1f77e67bd83a8"} Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.324288 4784 scope.go:117] "RemoveContainer" containerID="f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.358057 4784 scope.go:117] "RemoveContainer" containerID="90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.376884 4784 scope.go:117] "RemoveContainer" containerID="51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.395300 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvvr4"] Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.409523 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pvvr4"] Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.448301 4784 scope.go:117] "RemoveContainer" 
containerID="f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896" Jan 23 07:50:52 crc kubenswrapper[4784]: E0123 07:50:52.448718 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896\": container with ID starting with f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896 not found: ID does not exist" containerID="f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.448793 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896"} err="failed to get container status \"f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896\": rpc error: code = NotFound desc = could not find container \"f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896\": container with ID starting with f94a243beca07b83a7c41142a27ab1e59b093f44fabb56e5c1d35b9e1489f896 not found: ID does not exist" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.448828 4784 scope.go:117] "RemoveContainer" containerID="90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209" Jan 23 07:50:52 crc kubenswrapper[4784]: E0123 07:50:52.449229 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209\": container with ID starting with 90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209 not found: ID does not exist" containerID="90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.449257 4784 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209"} err="failed to get container status \"90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209\": rpc error: code = NotFound desc = could not find container \"90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209\": container with ID starting with 90d445057cbf61bd58bfb188c6824365e60f6a90eafee5d9f0212f4b544b3209 not found: ID does not exist" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.449275 4784 scope.go:117] "RemoveContainer" containerID="51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623" Jan 23 07:50:52 crc kubenswrapper[4784]: E0123 07:50:52.449721 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623\": container with ID starting with 51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623 not found: ID does not exist" containerID="51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623" Jan 23 07:50:52 crc kubenswrapper[4784]: I0123 07:50:52.449773 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623"} err="failed to get container status \"51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623\": rpc error: code = NotFound desc = could not find container \"51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623\": container with ID starting with 51b6ae54235e45155e28ba851059aa1fbca824dce25c124fe7377816fc6d1623 not found: ID does not exist" Jan 23 07:50:53 crc kubenswrapper[4784]: I0123 07:50:53.272109 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36881164-593f-484a-977d-abca9962673b" path="/var/lib/kubelet/pods/36881164-593f-484a-977d-abca9962673b/volumes" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 
07:51:54.593627 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xllvc"] Jan 23 07:51:54 crc kubenswrapper[4784]: E0123 07:51:54.594887 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36881164-593f-484a-977d-abca9962673b" containerName="registry-server" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.594910 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="36881164-593f-484a-977d-abca9962673b" containerName="registry-server" Jan 23 07:51:54 crc kubenswrapper[4784]: E0123 07:51:54.594941 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36881164-593f-484a-977d-abca9962673b" containerName="extract-utilities" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.594956 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="36881164-593f-484a-977d-abca9962673b" containerName="extract-utilities" Jan 23 07:51:54 crc kubenswrapper[4784]: E0123 07:51:54.594991 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerName="extract-utilities" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.595005 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerName="extract-utilities" Jan 23 07:51:54 crc kubenswrapper[4784]: E0123 07:51:54.595084 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerName="extract-content" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.595099 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerName="extract-content" Jan 23 07:51:54 crc kubenswrapper[4784]: E0123 07:51:54.595129 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerName="registry-server" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.595143 4784 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerName="registry-server" Jan 23 07:51:54 crc kubenswrapper[4784]: E0123 07:51:54.595164 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36881164-593f-484a-977d-abca9962673b" containerName="extract-content" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.595176 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="36881164-593f-484a-977d-abca9962673b" containerName="extract-content" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.595525 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1c8e51-c730-4523-9c48-e8dbc626c95d" containerName="registry-server" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.595562 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="36881164-593f-484a-977d-abca9962673b" containerName="registry-server" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.598272 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.615398 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xllvc"] Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.758337 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-catalog-content\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.758592 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-utilities\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.758711 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmrr\" (UniqueName: \"kubernetes.io/projected/96876103-f424-4c23-87c7-9c786e151a45-kube-api-access-xxmrr\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.861004 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-utilities\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.861091 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xxmrr\" (UniqueName: \"kubernetes.io/projected/96876103-f424-4c23-87c7-9c786e151a45-kube-api-access-xxmrr\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.861162 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-catalog-content\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.861551 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-utilities\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.861622 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-catalog-content\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.907629 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmrr\" (UniqueName: \"kubernetes.io/projected/96876103-f424-4c23-87c7-9c786e151a45-kube-api-access-xxmrr\") pod \"redhat-operators-xllvc\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:54 crc kubenswrapper[4784]: I0123 07:51:54.928197 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:51:55 crc kubenswrapper[4784]: W0123 07:51:55.457909 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96876103_f424_4c23_87c7_9c786e151a45.slice/crio-d67740f319f71276852736e421c5a7035c405159a70eb772b4d6ceefa3ef6b0e WatchSource:0}: Error finding container d67740f319f71276852736e421c5a7035c405159a70eb772b4d6ceefa3ef6b0e: Status 404 returned error can't find the container with id d67740f319f71276852736e421c5a7035c405159a70eb772b4d6ceefa3ef6b0e Jan 23 07:51:55 crc kubenswrapper[4784]: I0123 07:51:55.471109 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xllvc"] Jan 23 07:51:56 crc kubenswrapper[4784]: I0123 07:51:56.031608 4784 generic.go:334] "Generic (PLEG): container finished" podID="96876103-f424-4c23-87c7-9c786e151a45" containerID="bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae" exitCode=0 Jan 23 07:51:56 crc kubenswrapper[4784]: I0123 07:51:56.031722 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xllvc" event={"ID":"96876103-f424-4c23-87c7-9c786e151a45","Type":"ContainerDied","Data":"bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae"} Jan 23 07:51:56 crc kubenswrapper[4784]: I0123 07:51:56.032044 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xllvc" event={"ID":"96876103-f424-4c23-87c7-9c786e151a45","Type":"ContainerStarted","Data":"d67740f319f71276852736e421c5a7035c405159a70eb772b4d6ceefa3ef6b0e"} Jan 23 07:51:57 crc kubenswrapper[4784]: I0123 07:51:57.045980 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xllvc" 
event={"ID":"96876103-f424-4c23-87c7-9c786e151a45","Type":"ContainerStarted","Data":"bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f"} Jan 23 07:51:58 crc kubenswrapper[4784]: I0123 07:51:58.059766 4784 generic.go:334] "Generic (PLEG): container finished" podID="96876103-f424-4c23-87c7-9c786e151a45" containerID="bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f" exitCode=0 Jan 23 07:51:58 crc kubenswrapper[4784]: I0123 07:51:58.059822 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xllvc" event={"ID":"96876103-f424-4c23-87c7-9c786e151a45","Type":"ContainerDied","Data":"bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f"} Jan 23 07:52:00 crc kubenswrapper[4784]: I0123 07:52:00.107165 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xllvc" event={"ID":"96876103-f424-4c23-87c7-9c786e151a45","Type":"ContainerStarted","Data":"96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88"} Jan 23 07:52:00 crc kubenswrapper[4784]: I0123 07:52:00.135653 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xllvc" podStartSLOduration=3.271677317 podStartE2EDuration="6.135590587s" podCreationTimestamp="2026-01-23 07:51:54 +0000 UTC" firstStartedPulling="2026-01-23 07:51:56.033276246 +0000 UTC m=+5519.265784220" lastFinishedPulling="2026-01-23 07:51:58.897189516 +0000 UTC m=+5522.129697490" observedRunningTime="2026-01-23 07:52:00.129855936 +0000 UTC m=+5523.362363960" watchObservedRunningTime="2026-01-23 07:52:00.135590587 +0000 UTC m=+5523.368098591" Jan 23 07:52:04 crc kubenswrapper[4784]: I0123 07:52:04.929809 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:52:04 crc kubenswrapper[4784]: I0123 07:52:04.930528 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:52:05 crc kubenswrapper[4784]: I0123 07:52:05.996331 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xllvc" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="registry-server" probeResult="failure" output=< Jan 23 07:52:05 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 07:52:05 crc kubenswrapper[4784]: > Jan 23 07:52:15 crc kubenswrapper[4784]: I0123 07:52:15.015569 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:52:15 crc kubenswrapper[4784]: I0123 07:52:15.099042 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:52:15 crc kubenswrapper[4784]: I0123 07:52:15.272710 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xllvc"] Jan 23 07:52:16 crc kubenswrapper[4784]: I0123 07:52:16.271213 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xllvc" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="registry-server" containerID="cri-o://96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88" gracePeriod=2 Jan 23 07:52:16 crc kubenswrapper[4784]: I0123 07:52:16.804639 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:52:16 crc kubenswrapper[4784]: I0123 07:52:16.980503 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-catalog-content\") pod \"96876103-f424-4c23-87c7-9c786e151a45\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " Jan 23 07:52:16 crc kubenswrapper[4784]: I0123 07:52:16.980627 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxmrr\" (UniqueName: \"kubernetes.io/projected/96876103-f424-4c23-87c7-9c786e151a45-kube-api-access-xxmrr\") pod \"96876103-f424-4c23-87c7-9c786e151a45\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " Jan 23 07:52:16 crc kubenswrapper[4784]: I0123 07:52:16.980700 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-utilities\") pod \"96876103-f424-4c23-87c7-9c786e151a45\" (UID: \"96876103-f424-4c23-87c7-9c786e151a45\") " Jan 23 07:52:16 crc kubenswrapper[4784]: I0123 07:52:16.981635 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-utilities" (OuterVolumeSpecName: "utilities") pod "96876103-f424-4c23-87c7-9c786e151a45" (UID: "96876103-f424-4c23-87c7-9c786e151a45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:52:16 crc kubenswrapper[4784]: I0123 07:52:16.991345 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96876103-f424-4c23-87c7-9c786e151a45-kube-api-access-xxmrr" (OuterVolumeSpecName: "kube-api-access-xxmrr") pod "96876103-f424-4c23-87c7-9c786e151a45" (UID: "96876103-f424-4c23-87c7-9c786e151a45"). InnerVolumeSpecName "kube-api-access-xxmrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.083281 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxmrr\" (UniqueName: \"kubernetes.io/projected/96876103-f424-4c23-87c7-9c786e151a45-kube-api-access-xxmrr\") on node \"crc\" DevicePath \"\"" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.083322 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.097514 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96876103-f424-4c23-87c7-9c786e151a45" (UID: "96876103-f424-4c23-87c7-9c786e151a45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.185888 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96876103-f424-4c23-87c7-9c786e151a45-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.290052 4784 generic.go:334] "Generic (PLEG): container finished" podID="96876103-f424-4c23-87c7-9c786e151a45" containerID="96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88" exitCode=0 Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.290107 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xllvc" event={"ID":"96876103-f424-4c23-87c7-9c786e151a45","Type":"ContainerDied","Data":"96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88"} Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.290139 4784 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xllvc" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.290354 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xllvc" event={"ID":"96876103-f424-4c23-87c7-9c786e151a45","Type":"ContainerDied","Data":"d67740f319f71276852736e421c5a7035c405159a70eb772b4d6ceefa3ef6b0e"} Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.290388 4784 scope.go:117] "RemoveContainer" containerID="96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.328673 4784 scope.go:117] "RemoveContainer" containerID="bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.350478 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xllvc"] Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.360737 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xllvc"] Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.373484 4784 scope.go:117] "RemoveContainer" containerID="bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.410814 4784 scope.go:117] "RemoveContainer" containerID="96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88" Jan 23 07:52:17 crc kubenswrapper[4784]: E0123 07:52:17.411781 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88\": container with ID starting with 96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88 not found: ID does not exist" containerID="96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.411818 4784 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88"} err="failed to get container status \"96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88\": rpc error: code = NotFound desc = could not find container \"96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88\": container with ID starting with 96c59f85d872ab84907d350744e7f04420c9c0ca9f80fee3d539b3f843700a88 not found: ID does not exist" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.411843 4784 scope.go:117] "RemoveContainer" containerID="bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f" Jan 23 07:52:17 crc kubenswrapper[4784]: E0123 07:52:17.412159 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f\": container with ID starting with bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f not found: ID does not exist" containerID="bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.412220 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f"} err="failed to get container status \"bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f\": rpc error: code = NotFound desc = could not find container \"bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f\": container with ID starting with bef80bf50f75a9add322012535acd1382056f3ba73b12fbed4dbfe8acd26eb6f not found: ID does not exist" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.412261 4784 scope.go:117] "RemoveContainer" containerID="bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae" Jan 23 07:52:17 crc kubenswrapper[4784]: E0123 
07:52:17.412554 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae\": container with ID starting with bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae not found: ID does not exist" containerID="bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae" Jan 23 07:52:17 crc kubenswrapper[4784]: I0123 07:52:17.412581 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae"} err="failed to get container status \"bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae\": rpc error: code = NotFound desc = could not find container \"bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae\": container with ID starting with bce1e29cd455b80a73b64edcfb5879f6abd1bc70ee0d6388f8ec6d66a1acfbae not found: ID does not exist" Jan 23 07:52:19 crc kubenswrapper[4784]: I0123 07:52:19.276485 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96876103-f424-4c23-87c7-9c786e151a45" path="/var/lib/kubelet/pods/96876103-f424-4c23-87c7-9c786e151a45/volumes" Jan 23 07:52:53 crc kubenswrapper[4784]: I0123 07:52:53.602709 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:52:53 crc kubenswrapper[4784]: I0123 07:52:53.603374 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 23 07:53:23 crc kubenswrapper[4784]: I0123 07:53:23.603237 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:53:23 crc kubenswrapper[4784]: I0123 07:53:23.603874 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:53:53 crc kubenswrapper[4784]: I0123 07:53:53.603258 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:53:53 crc kubenswrapper[4784]: I0123 07:53:53.603925 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:53:53 crc kubenswrapper[4784]: I0123 07:53:53.604016 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 07:53:53 crc kubenswrapper[4784]: I0123 07:53:53.604812 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a4925ef8f367f766d6f89868babd919efb98fcf740c1549570297eb34a4c036"} 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:53:53 crc kubenswrapper[4784]: I0123 07:53:53.604910 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://8a4925ef8f367f766d6f89868babd919efb98fcf740c1549570297eb34a4c036" gracePeriod=600 Jan 23 07:53:54 crc kubenswrapper[4784]: I0123 07:53:54.512465 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="8a4925ef8f367f766d6f89868babd919efb98fcf740c1549570297eb34a4c036" exitCode=0 Jan 23 07:53:54 crc kubenswrapper[4784]: I0123 07:53:54.512539 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"8a4925ef8f367f766d6f89868babd919efb98fcf740c1549570297eb34a4c036"} Jan 23 07:53:54 crc kubenswrapper[4784]: I0123 07:53:54.512989 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f"} Jan 23 07:53:54 crc kubenswrapper[4784]: I0123 07:53:54.513037 4784 scope.go:117] "RemoveContainer" containerID="a229bda558586f244fcb5b2f1644148098794da7e6912677172f13f15cd46f65" Jan 23 07:55:53 crc kubenswrapper[4784]: I0123 07:55:53.603292 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 23 07:55:53 crc kubenswrapper[4784]: I0123 07:55:53.604896 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:56:23 crc kubenswrapper[4784]: I0123 07:56:23.603158 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:56:23 crc kubenswrapper[4784]: I0123 07:56:23.603712 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:56:42 crc kubenswrapper[4784]: I0123 07:56:42.701867 4784 scope.go:117] "RemoveContainer" containerID="b155068579279bfcaec733e5b8bc301802454ea3ce8e262262b77803c1bbc819" Jan 23 07:56:42 crc kubenswrapper[4784]: I0123 07:56:42.752318 4784 scope.go:117] "RemoveContainer" containerID="caf1fcbd1f0620ea6104df2d9ace749fee7e8eb1783d456009a3cf737147d0c2" Jan 23 07:56:42 crc kubenswrapper[4784]: I0123 07:56:42.786873 4784 scope.go:117] "RemoveContainer" containerID="c6c645ef245e688b56da8b0c44fa4b7462a6fdb1c37c7305bc3f19a1d0d4d97b" Jan 23 07:56:50 crc kubenswrapper[4784]: I0123 07:56:50.622431 4784 generic.go:334] "Generic (PLEG): container finished" podID="a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" containerID="5c4cf8ba40a9fe304a4ed243b096e14daedc5ba932db30e5d9c5e1f290b9ec9c" exitCode=1 Jan 23 07:56:50 crc kubenswrapper[4784]: I0123 
07:56:50.622535 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc","Type":"ContainerDied","Data":"5c4cf8ba40a9fe304a4ed243b096e14daedc5ba932db30e5d9c5e1f290b9ec9c"} Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.069331 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.253902 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ca-certs\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.253979 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config-secret\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.254039 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.254073 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnqg8\" (UniqueName: \"kubernetes.io/projected/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-kube-api-access-rnqg8\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.254154 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-workdir\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.254256 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.254311 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-temporary\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.254389 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ssh-key\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.254443 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-config-data\") pod \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\" (UID: \"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc\") " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.255217 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-config-data" (OuterVolumeSpecName: "config-data") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: 
"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.255247 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.334638 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.357887 4784 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.357944 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.357964 4784 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.647570 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc","Type":"ContainerDied","Data":"426b0f9d789d8ea2e21f84db9b42e631b7d2e1f242c74785a05507789d5a4968"} Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.647605 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="426b0f9d789d8ea2e21f84db9b42e631b7d2e1f242c74785a05507789d5a4968" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.647662 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.870936 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-kube-api-access-rnqg8" (OuterVolumeSpecName: "kube-api-access-rnqg8") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). 
InnerVolumeSpecName "kube-api-access-rnqg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.874593 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.913429 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.942796 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.959995 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.971135 4784 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.971164 4784 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.971188 4784 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.971198 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnqg8\" (UniqueName: \"kubernetes.io/projected/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-kube-api-access-rnqg8\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:52 crc kubenswrapper[4784]: I0123 07:56:52.971208 4784 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:53 crc kubenswrapper[4784]: I0123 07:56:53.086508 4784 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 23 07:56:53 crc kubenswrapper[4784]: I0123 07:56:53.093747 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" (UID: "a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:56:53 crc kubenswrapper[4784]: I0123 07:56:53.175010 4784 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:53 crc kubenswrapper[4784]: I0123 07:56:53.175045 4784 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 23 07:56:53 crc kubenswrapper[4784]: I0123 07:56:53.603571 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:56:53 crc kubenswrapper[4784]: I0123 07:56:53.603928 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:56:53 crc kubenswrapper[4784]: I0123 07:56:53.603982 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 07:56:53 crc kubenswrapper[4784]: I0123 07:56:53.604830 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:56:53 crc 
kubenswrapper[4784]: I0123 07:56:53.604905 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" gracePeriod=600 Jan 23 07:56:53 crc kubenswrapper[4784]: E0123 07:56:53.752336 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:56:54 crc kubenswrapper[4784]: I0123 07:56:54.679228 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" exitCode=0 Jan 23 07:56:54 crc kubenswrapper[4784]: I0123 07:56:54.679294 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f"} Jan 23 07:56:54 crc kubenswrapper[4784]: I0123 07:56:54.679358 4784 scope.go:117] "RemoveContainer" containerID="8a4925ef8f367f766d6f89868babd919efb98fcf740c1549570297eb34a4c036" Jan 23 07:56:54 crc kubenswrapper[4784]: I0123 07:56:54.680529 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:56:54 crc kubenswrapper[4784]: E0123 07:56:54.681279 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.102830 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 07:56:56 crc kubenswrapper[4784]: E0123 07:56:56.103843 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="extract-utilities" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.103892 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="extract-utilities" Jan 23 07:56:56 crc kubenswrapper[4784]: E0123 07:56:56.103921 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="registry-server" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.103933 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="registry-server" Jan 23 07:56:56 crc kubenswrapper[4784]: E0123 07:56:56.103956 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="extract-content" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.103968 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="extract-content" Jan 23 07:56:56 crc kubenswrapper[4784]: E0123 07:56:56.103993 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" containerName="tempest-tests-tempest-tests-runner" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.104006 4784 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" containerName="tempest-tests-tempest-tests-runner" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.104371 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="96876103-f424-4c23-87c7-9c786e151a45" containerName="registry-server" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.104435 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc" containerName="tempest-tests-tempest-tests-runner" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.105470 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.110652 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rmvxh" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.120333 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.244765 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97jq\" (UniqueName: \"kubernetes.io/projected/19d1621c-2a08-4b32-8039-f3ba8d4ea222-kube-api-access-f97jq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"19d1621c-2a08-4b32-8039-f3ba8d4ea222\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.245010 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"19d1621c-2a08-4b32-8039-f3ba8d4ea222\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.347747 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97jq\" (UniqueName: \"kubernetes.io/projected/19d1621c-2a08-4b32-8039-f3ba8d4ea222-kube-api-access-f97jq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"19d1621c-2a08-4b32-8039-f3ba8d4ea222\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.348908 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"19d1621c-2a08-4b32-8039-f3ba8d4ea222\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.349154 4784 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"19d1621c-2a08-4b32-8039-f3ba8d4ea222\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.374734 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97jq\" (UniqueName: \"kubernetes.io/projected/19d1621c-2a08-4b32-8039-f3ba8d4ea222-kube-api-access-f97jq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"19d1621c-2a08-4b32-8039-f3ba8d4ea222\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.376743 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"19d1621c-2a08-4b32-8039-f3ba8d4ea222\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.461952 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.982651 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:56:56 crc kubenswrapper[4784]: I0123 07:56:56.983042 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 07:56:57 crc kubenswrapper[4784]: I0123 07:56:57.734221 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"19d1621c-2a08-4b32-8039-f3ba8d4ea222","Type":"ContainerStarted","Data":"0c54d3e7fbb46f4e6d82963449429e0b0b5ff323e9a8fc62933edffdedce5478"} Jan 23 07:56:58 crc kubenswrapper[4784]: I0123 07:56:58.749860 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"19d1621c-2a08-4b32-8039-f3ba8d4ea222","Type":"ContainerStarted","Data":"8e6dbe3dd134f4e21f481c37f2f6c246e975d2ba2aa1f1499cf41216327336cc"} Jan 23 07:56:58 crc kubenswrapper[4784]: I0123 07:56:58.770906 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.8669258850000001 podStartE2EDuration="2.770887463s" podCreationTimestamp="2026-01-23 07:56:56 +0000 UTC" firstStartedPulling="2026-01-23 07:56:56.982260381 +0000 UTC m=+5820.214768365" lastFinishedPulling="2026-01-23 07:56:57.886221959 +0000 UTC m=+5821.118729943" 
observedRunningTime="2026-01-23 07:56:58.764972307 +0000 UTC m=+5821.997480321" watchObservedRunningTime="2026-01-23 07:56:58.770887463 +0000 UTC m=+5822.003395447" Jan 23 07:57:08 crc kubenswrapper[4784]: I0123 07:57:08.254008 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:57:08 crc kubenswrapper[4784]: E0123 07:57:08.254846 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:57:22 crc kubenswrapper[4784]: I0123 07:57:22.254662 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:57:22 crc kubenswrapper[4784]: E0123 07:57:22.263482 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:57:36 crc kubenswrapper[4784]: I0123 07:57:36.254484 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:57:36 crc kubenswrapper[4784]: E0123 07:57:36.255541 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.093912 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nthhr/must-gather-9jtnb"] Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.095851 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.097569 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nthhr"/"default-dockercfg-9dxs6" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.098309 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nthhr"/"openshift-service-ca.crt" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.098633 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nthhr"/"kube-root-ca.crt" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.103814 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nthhr/must-gather-9jtnb"] Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.135464 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxll6\" (UniqueName: \"kubernetes.io/projected/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-kube-api-access-bxll6\") pod \"must-gather-9jtnb\" (UID: \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\") " pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.135872 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-must-gather-output\") pod \"must-gather-9jtnb\" (UID: \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\") " pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.238401 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxll6\" (UniqueName: \"kubernetes.io/projected/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-kube-api-access-bxll6\") pod \"must-gather-9jtnb\" (UID: \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\") " pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.238871 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-must-gather-output\") pod \"must-gather-9jtnb\" (UID: \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\") " pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.239267 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-must-gather-output\") pod \"must-gather-9jtnb\" (UID: \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\") " pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.256477 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxll6\" (UniqueName: \"kubernetes.io/projected/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-kube-api-access-bxll6\") pod \"must-gather-9jtnb\" (UID: \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\") " pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.447479 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 07:57:37 crc kubenswrapper[4784]: I0123 07:57:37.946834 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nthhr/must-gather-9jtnb"] Jan 23 07:57:38 crc kubenswrapper[4784]: I0123 07:57:38.229979 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/must-gather-9jtnb" event={"ID":"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3","Type":"ContainerStarted","Data":"cc657a751b479e507f71cdcb56c868ce45e49190d80267e2966997e2f4e15884"} Jan 23 07:57:44 crc kubenswrapper[4784]: I0123 07:57:44.297646 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/must-gather-9jtnb" event={"ID":"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3","Type":"ContainerStarted","Data":"bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02"} Jan 23 07:57:45 crc kubenswrapper[4784]: I0123 07:57:45.313424 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/must-gather-9jtnb" event={"ID":"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3","Type":"ContainerStarted","Data":"deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a"} Jan 23 07:57:45 crc kubenswrapper[4784]: I0123 07:57:45.355417 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nthhr/must-gather-9jtnb" podStartSLOduration=2.344817344 podStartE2EDuration="8.355387032s" podCreationTimestamp="2026-01-23 07:57:37 +0000 UTC" firstStartedPulling="2026-01-23 07:57:37.960311952 +0000 UTC m=+5861.192819936" lastFinishedPulling="2026-01-23 07:57:43.97088165 +0000 UTC m=+5867.203389624" observedRunningTime="2026-01-23 07:57:45.347224985 +0000 UTC m=+5868.579732999" watchObservedRunningTime="2026-01-23 07:57:45.355387032 +0000 UTC m=+5868.587895006" Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.303276 4784 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-nthhr/crc-debug-94gwf"] Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.305316 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.406312 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a300cb75-e076-419b-83d0-2709accd9f17-host\") pod \"crc-debug-94gwf\" (UID: \"a300cb75-e076-419b-83d0-2709accd9f17\") " pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.406376 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdgr\" (UniqueName: \"kubernetes.io/projected/a300cb75-e076-419b-83d0-2709accd9f17-kube-api-access-rtdgr\") pod \"crc-debug-94gwf\" (UID: \"a300cb75-e076-419b-83d0-2709accd9f17\") " pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.508461 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdgr\" (UniqueName: \"kubernetes.io/projected/a300cb75-e076-419b-83d0-2709accd9f17-kube-api-access-rtdgr\") pod \"crc-debug-94gwf\" (UID: \"a300cb75-e076-419b-83d0-2709accd9f17\") " pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.508693 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a300cb75-e076-419b-83d0-2709accd9f17-host\") pod \"crc-debug-94gwf\" (UID: \"a300cb75-e076-419b-83d0-2709accd9f17\") " pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.508807 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/a300cb75-e076-419b-83d0-2709accd9f17-host\") pod \"crc-debug-94gwf\" (UID: \"a300cb75-e076-419b-83d0-2709accd9f17\") " pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.531801 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdgr\" (UniqueName: \"kubernetes.io/projected/a300cb75-e076-419b-83d0-2709accd9f17-kube-api-access-rtdgr\") pod \"crc-debug-94gwf\" (UID: \"a300cb75-e076-419b-83d0-2709accd9f17\") " pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:57:48 crc kubenswrapper[4784]: I0123 07:57:48.625566 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:57:49 crc kubenswrapper[4784]: I0123 07:57:49.353625 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/crc-debug-94gwf" event={"ID":"a300cb75-e076-419b-83d0-2709accd9f17","Type":"ContainerStarted","Data":"f0b74bc81a211e8967d5fae5d963b264a068232fd57641232cc18343a01b83d7"} Jan 23 07:57:50 crc kubenswrapper[4784]: I0123 07:57:50.254013 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:57:50 crc kubenswrapper[4784]: E0123 07:57:50.254552 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:58:00 crc kubenswrapper[4784]: I0123 07:58:00.476658 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/crc-debug-94gwf" 
event={"ID":"a300cb75-e076-419b-83d0-2709accd9f17","Type":"ContainerStarted","Data":"7912bc6e15cab812b8efee1a8962bcc6421ee0851bf394329730fc4080dec49c"} Jan 23 07:58:00 crc kubenswrapper[4784]: I0123 07:58:00.502708 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nthhr/crc-debug-94gwf" podStartSLOduration=1.045782502 podStartE2EDuration="12.502684275s" podCreationTimestamp="2026-01-23 07:57:48 +0000 UTC" firstStartedPulling="2026-01-23 07:57:48.683902881 +0000 UTC m=+5871.916410855" lastFinishedPulling="2026-01-23 07:58:00.140804654 +0000 UTC m=+5883.373312628" observedRunningTime="2026-01-23 07:58:00.492174195 +0000 UTC m=+5883.724682169" watchObservedRunningTime="2026-01-23 07:58:00.502684275 +0000 UTC m=+5883.735192279" Jan 23 07:58:03 crc kubenswrapper[4784]: I0123 07:58:03.254087 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:58:03 crc kubenswrapper[4784]: E0123 07:58:03.255039 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:58:15 crc kubenswrapper[4784]: I0123 07:58:15.254464 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:58:15 crc kubenswrapper[4784]: E0123 07:58:15.255287 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:58:26 crc kubenswrapper[4784]: I0123 07:58:26.255566 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:58:26 crc kubenswrapper[4784]: E0123 07:58:26.256920 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:58:38 crc kubenswrapper[4784]: I0123 07:58:38.254337 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:58:38 crc kubenswrapper[4784]: E0123 07:58:38.255264 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:58:47 crc kubenswrapper[4784]: I0123 07:58:47.022095 4784 generic.go:334] "Generic (PLEG): container finished" podID="a300cb75-e076-419b-83d0-2709accd9f17" containerID="7912bc6e15cab812b8efee1a8962bcc6421ee0851bf394329730fc4080dec49c" exitCode=0 Jan 23 07:58:47 crc kubenswrapper[4784]: I0123 07:58:47.022293 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/crc-debug-94gwf" 
event={"ID":"a300cb75-e076-419b-83d0-2709accd9f17","Type":"ContainerDied","Data":"7912bc6e15cab812b8efee1a8962bcc6421ee0851bf394329730fc4080dec49c"} Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.144402 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.187736 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nthhr/crc-debug-94gwf"] Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.197163 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nthhr/crc-debug-94gwf"] Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.243735 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a300cb75-e076-419b-83d0-2709accd9f17-host\") pod \"a300cb75-e076-419b-83d0-2709accd9f17\" (UID: \"a300cb75-e076-419b-83d0-2709accd9f17\") " Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.243909 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a300cb75-e076-419b-83d0-2709accd9f17-host" (OuterVolumeSpecName: "host") pod "a300cb75-e076-419b-83d0-2709accd9f17" (UID: "a300cb75-e076-419b-83d0-2709accd9f17"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.243958 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdgr\" (UniqueName: \"kubernetes.io/projected/a300cb75-e076-419b-83d0-2709accd9f17-kube-api-access-rtdgr\") pod \"a300cb75-e076-419b-83d0-2709accd9f17\" (UID: \"a300cb75-e076-419b-83d0-2709accd9f17\") " Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.244644 4784 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a300cb75-e076-419b-83d0-2709accd9f17-host\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.251482 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a300cb75-e076-419b-83d0-2709accd9f17-kube-api-access-rtdgr" (OuterVolumeSpecName: "kube-api-access-rtdgr") pod "a300cb75-e076-419b-83d0-2709accd9f17" (UID: "a300cb75-e076-419b-83d0-2709accd9f17"). InnerVolumeSpecName "kube-api-access-rtdgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:48 crc kubenswrapper[4784]: I0123 07:58:48.347454 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtdgr\" (UniqueName: \"kubernetes.io/projected/a300cb75-e076-419b-83d0-2709accd9f17-kube-api-access-rtdgr\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.046559 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0b74bc81a211e8967d5fae5d963b264a068232fd57641232cc18343a01b83d7" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.046631 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-94gwf" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.268605 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a300cb75-e076-419b-83d0-2709accd9f17" path="/var/lib/kubelet/pods/a300cb75-e076-419b-83d0-2709accd9f17/volumes" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.424850 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nthhr/crc-debug-kh4jv"] Jan 23 07:58:49 crc kubenswrapper[4784]: E0123 07:58:49.425308 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a300cb75-e076-419b-83d0-2709accd9f17" containerName="container-00" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.425360 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="a300cb75-e076-419b-83d0-2709accd9f17" containerName="container-00" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.425538 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="a300cb75-e076-419b-83d0-2709accd9f17" containerName="container-00" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.426234 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.580115 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8xh\" (UniqueName: \"kubernetes.io/projected/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-kube-api-access-kl8xh\") pod \"crc-debug-kh4jv\" (UID: \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\") " pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.580175 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-host\") pod \"crc-debug-kh4jv\" (UID: \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\") " pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.682946 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl8xh\" (UniqueName: \"kubernetes.io/projected/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-kube-api-access-kl8xh\") pod \"crc-debug-kh4jv\" (UID: \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\") " pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.683013 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-host\") pod \"crc-debug-kh4jv\" (UID: \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\") " pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.683407 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-host\") pod \"crc-debug-kh4jv\" (UID: \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\") " pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:49 crc 
kubenswrapper[4784]: I0123 07:58:49.727853 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl8xh\" (UniqueName: \"kubernetes.io/projected/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-kube-api-access-kl8xh\") pod \"crc-debug-kh4jv\" (UID: \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\") " pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:49 crc kubenswrapper[4784]: I0123 07:58:49.741636 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:50 crc kubenswrapper[4784]: I0123 07:58:50.062558 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/crc-debug-kh4jv" event={"ID":"c0d9cd9c-eae1-4801-9bfb-905031c9f99c","Type":"ContainerStarted","Data":"501978905cb42fb0b60588c1f4d83cda43cfd9d1870eb24604329b546e1b638b"} Jan 23 07:58:50 crc kubenswrapper[4784]: I0123 07:58:50.063138 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/crc-debug-kh4jv" event={"ID":"c0d9cd9c-eae1-4801-9bfb-905031c9f99c","Type":"ContainerStarted","Data":"eab1e28f0f062dad6e4cebf6a8852dd554b6a649e33b325e18bf05e3df3b3d2a"} Jan 23 07:58:50 crc kubenswrapper[4784]: I0123 07:58:50.081719 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nthhr/crc-debug-kh4jv" podStartSLOduration=1.081701931 podStartE2EDuration="1.081701931s" podCreationTimestamp="2026-01-23 07:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:58:50.080547835 +0000 UTC m=+5933.313055859" watchObservedRunningTime="2026-01-23 07:58:50.081701931 +0000 UTC m=+5933.314209915" Jan 23 07:58:51 crc kubenswrapper[4784]: I0123 07:58:51.073686 4784 generic.go:334] "Generic (PLEG): container finished" podID="c0d9cd9c-eae1-4801-9bfb-905031c9f99c" 
containerID="501978905cb42fb0b60588c1f4d83cda43cfd9d1870eb24604329b546e1b638b" exitCode=0 Jan 23 07:58:51 crc kubenswrapper[4784]: I0123 07:58:51.073730 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/crc-debug-kh4jv" event={"ID":"c0d9cd9c-eae1-4801-9bfb-905031c9f99c","Type":"ContainerDied","Data":"501978905cb42fb0b60588c1f4d83cda43cfd9d1870eb24604329b546e1b638b"} Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.172617 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.281539 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl8xh\" (UniqueName: \"kubernetes.io/projected/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-kube-api-access-kl8xh\") pod \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\" (UID: \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\") " Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.281585 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-host\") pod \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\" (UID: \"c0d9cd9c-eae1-4801-9bfb-905031c9f99c\") " Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.282104 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-host" (OuterVolumeSpecName: "host") pod "c0d9cd9c-eae1-4801-9bfb-905031c9f99c" (UID: "c0d9cd9c-eae1-4801-9bfb-905031c9f99c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.320438 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-kube-api-access-kl8xh" (OuterVolumeSpecName: "kube-api-access-kl8xh") pod "c0d9cd9c-eae1-4801-9bfb-905031c9f99c" (UID: "c0d9cd9c-eae1-4801-9bfb-905031c9f99c"). InnerVolumeSpecName "kube-api-access-kl8xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.389218 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl8xh\" (UniqueName: \"kubernetes.io/projected/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-kube-api-access-kl8xh\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.389256 4784 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0d9cd9c-eae1-4801-9bfb-905031c9f99c-host\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.658243 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nthhr/crc-debug-kh4jv"] Jan 23 07:58:52 crc kubenswrapper[4784]: I0123 07:58:52.666349 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nthhr/crc-debug-kh4jv"] Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.095322 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab1e28f0f062dad6e4cebf6a8852dd554b6a649e33b325e18bf05e3df3b3d2a" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.095856 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-kh4jv" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.254429 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:58:53 crc kubenswrapper[4784]: E0123 07:58:53.254639 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.264356 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d9cd9c-eae1-4801-9bfb-905031c9f99c" path="/var/lib/kubelet/pods/c0d9cd9c-eae1-4801-9bfb-905031c9f99c/volumes" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.885109 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nthhr/crc-debug-g72kg"] Jan 23 07:58:53 crc kubenswrapper[4784]: E0123 07:58:53.885937 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d9cd9c-eae1-4801-9bfb-905031c9f99c" containerName="container-00" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.885959 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d9cd9c-eae1-4801-9bfb-905031c9f99c" containerName="container-00" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.886230 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d9cd9c-eae1-4801-9bfb-905031c9f99c" containerName="container-00" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.887050 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.925885 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj8qx\" (UniqueName: \"kubernetes.io/projected/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-kube-api-access-vj8qx\") pod \"crc-debug-g72kg\" (UID: \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\") " pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:53 crc kubenswrapper[4784]: I0123 07:58:53.925970 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-host\") pod \"crc-debug-g72kg\" (UID: \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\") " pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:54 crc kubenswrapper[4784]: I0123 07:58:54.027656 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj8qx\" (UniqueName: \"kubernetes.io/projected/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-kube-api-access-vj8qx\") pod \"crc-debug-g72kg\" (UID: \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\") " pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:54 crc kubenswrapper[4784]: I0123 07:58:54.027747 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-host\") pod \"crc-debug-g72kg\" (UID: \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\") " pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:54 crc kubenswrapper[4784]: I0123 07:58:54.027967 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-host\") pod \"crc-debug-g72kg\" (UID: \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\") " pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:54 crc 
kubenswrapper[4784]: I0123 07:58:54.059146 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj8qx\" (UniqueName: \"kubernetes.io/projected/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-kube-api-access-vj8qx\") pod \"crc-debug-g72kg\" (UID: \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\") " pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:54 crc kubenswrapper[4784]: I0123 07:58:54.210401 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:55 crc kubenswrapper[4784]: I0123 07:58:55.122775 4784 generic.go:334] "Generic (PLEG): container finished" podID="06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b" containerID="a126850db0907e08da63075b0229e7079a7d13b82394335fd979053d7cecef2d" exitCode=0 Jan 23 07:58:55 crc kubenswrapper[4784]: I0123 07:58:55.122871 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/crc-debug-g72kg" event={"ID":"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b","Type":"ContainerDied","Data":"a126850db0907e08da63075b0229e7079a7d13b82394335fd979053d7cecef2d"} Jan 23 07:58:55 crc kubenswrapper[4784]: I0123 07:58:55.123152 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/crc-debug-g72kg" event={"ID":"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b","Type":"ContainerStarted","Data":"f8902ae9e802c04302da6b364dd77b86294822c8caadbea5da3efad38068a01b"} Jan 23 07:58:55 crc kubenswrapper[4784]: I0123 07:58:55.162870 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nthhr/crc-debug-g72kg"] Jan 23 07:58:55 crc kubenswrapper[4784]: I0123 07:58:55.176324 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nthhr/crc-debug-g72kg"] Jan 23 07:58:56 crc kubenswrapper[4784]: I0123 07:58:56.249099 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:56 crc kubenswrapper[4784]: I0123 07:58:56.374045 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-host\") pod \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\" (UID: \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\") " Jan 23 07:58:56 crc kubenswrapper[4784]: I0123 07:58:56.374127 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-host" (OuterVolumeSpecName: "host") pod "06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b" (UID: "06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:58:56 crc kubenswrapper[4784]: I0123 07:58:56.374208 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj8qx\" (UniqueName: \"kubernetes.io/projected/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-kube-api-access-vj8qx\") pod \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\" (UID: \"06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b\") " Jan 23 07:58:56 crc kubenswrapper[4784]: I0123 07:58:56.376390 4784 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-host\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:56 crc kubenswrapper[4784]: I0123 07:58:56.380569 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-kube-api-access-vj8qx" (OuterVolumeSpecName: "kube-api-access-vj8qx") pod "06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b" (UID: "06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b"). InnerVolumeSpecName "kube-api-access-vj8qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:56 crc kubenswrapper[4784]: I0123 07:58:56.477688 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj8qx\" (UniqueName: \"kubernetes.io/projected/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b-kube-api-access-vj8qx\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:57 crc kubenswrapper[4784]: I0123 07:58:57.142686 4784 scope.go:117] "RemoveContainer" containerID="a126850db0907e08da63075b0229e7079a7d13b82394335fd979053d7cecef2d" Jan 23 07:58:57 crc kubenswrapper[4784]: I0123 07:58:57.142771 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nthhr/crc-debug-g72kg" Jan 23 07:58:57 crc kubenswrapper[4784]: I0123 07:58:57.271772 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b" path="/var/lib/kubelet/pods/06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b/volumes" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.122764 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jtk"] Jan 23 07:59:00 crc kubenswrapper[4784]: E0123 07:59:00.123844 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b" containerName="container-00" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.123862 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b" containerName="container-00" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.124119 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e66141-0dcd-4f02-bb2a-f2c3eb5f1e1b" containerName="container-00" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.125839 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.139650 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jtk"] Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.145069 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-catalog-content\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.145271 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4czdf\" (UniqueName: \"kubernetes.io/projected/6447845f-e96b-4287-9040-080a7e5c9026-kube-api-access-4czdf\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.145641 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-utilities\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.246940 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-utilities\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.247030 4784 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-catalog-content\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.247095 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4czdf\" (UniqueName: \"kubernetes.io/projected/6447845f-e96b-4287-9040-080a7e5c9026-kube-api-access-4czdf\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.247403 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-utilities\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.247533 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-catalog-content\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.296845 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4czdf\" (UniqueName: \"kubernetes.io/projected/6447845f-e96b-4287-9040-080a7e5c9026-kube-api-access-4czdf\") pod \"redhat-marketplace-v9jtk\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.442118 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:00 crc kubenswrapper[4784]: I0123 07:59:00.936644 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jtk"] Jan 23 07:59:01 crc kubenswrapper[4784]: I0123 07:59:01.183353 4784 generic.go:334] "Generic (PLEG): container finished" podID="6447845f-e96b-4287-9040-080a7e5c9026" containerID="1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e" exitCode=0 Jan 23 07:59:01 crc kubenswrapper[4784]: I0123 07:59:01.183392 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jtk" event={"ID":"6447845f-e96b-4287-9040-080a7e5c9026","Type":"ContainerDied","Data":"1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e"} Jan 23 07:59:01 crc kubenswrapper[4784]: I0123 07:59:01.183418 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jtk" event={"ID":"6447845f-e96b-4287-9040-080a7e5c9026","Type":"ContainerStarted","Data":"6e51624990893526c26b4c221169c4c47e2cb93265aee6c1768326f7d377fa76"} Jan 23 07:59:02 crc kubenswrapper[4784]: I0123 07:59:02.193923 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jtk" event={"ID":"6447845f-e96b-4287-9040-080a7e5c9026","Type":"ContainerStarted","Data":"4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254"} Jan 23 07:59:03 crc kubenswrapper[4784]: I0123 07:59:03.204854 4784 generic.go:334] "Generic (PLEG): container finished" podID="6447845f-e96b-4287-9040-080a7e5c9026" containerID="4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254" exitCode=0 Jan 23 07:59:03 crc kubenswrapper[4784]: I0123 07:59:03.204960 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jtk" 
event={"ID":"6447845f-e96b-4287-9040-080a7e5c9026","Type":"ContainerDied","Data":"4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254"} Jan 23 07:59:04 crc kubenswrapper[4784]: I0123 07:59:04.218139 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jtk" event={"ID":"6447845f-e96b-4287-9040-080a7e5c9026","Type":"ContainerStarted","Data":"3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd"} Jan 23 07:59:04 crc kubenswrapper[4784]: I0123 07:59:04.256543 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:59:04 crc kubenswrapper[4784]: E0123 07:59:04.257110 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:59:04 crc kubenswrapper[4784]: I0123 07:59:04.275248 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v9jtk" podStartSLOduration=1.523366225 podStartE2EDuration="4.275225984s" podCreationTimestamp="2026-01-23 07:59:00 +0000 UTC" firstStartedPulling="2026-01-23 07:59:01.185416141 +0000 UTC m=+5944.417924135" lastFinishedPulling="2026-01-23 07:59:03.93727591 +0000 UTC m=+5947.169783894" observedRunningTime="2026-01-23 07:59:04.238201924 +0000 UTC m=+5947.470709898" watchObservedRunningTime="2026-01-23 07:59:04.275225984 +0000 UTC m=+5947.507733958" Jan 23 07:59:10 crc kubenswrapper[4784]: I0123 07:59:10.443230 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:10 crc 
kubenswrapper[4784]: I0123 07:59:10.445101 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:10 crc kubenswrapper[4784]: I0123 07:59:10.516386 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:11 crc kubenswrapper[4784]: I0123 07:59:11.362344 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:11 crc kubenswrapper[4784]: I0123 07:59:11.424555 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jtk"] Jan 23 07:59:13 crc kubenswrapper[4784]: I0123 07:59:13.332261 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v9jtk" podUID="6447845f-e96b-4287-9040-080a7e5c9026" containerName="registry-server" containerID="cri-o://3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd" gracePeriod=2 Jan 23 07:59:13 crc kubenswrapper[4784]: I0123 07:59:13.823395 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:13 crc kubenswrapper[4784]: I0123 07:59:13.931879 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4czdf\" (UniqueName: \"kubernetes.io/projected/6447845f-e96b-4287-9040-080a7e5c9026-kube-api-access-4czdf\") pod \"6447845f-e96b-4287-9040-080a7e5c9026\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " Jan 23 07:59:13 crc kubenswrapper[4784]: I0123 07:59:13.932041 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-catalog-content\") pod \"6447845f-e96b-4287-9040-080a7e5c9026\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " Jan 23 07:59:13 crc kubenswrapper[4784]: I0123 07:59:13.932138 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-utilities\") pod \"6447845f-e96b-4287-9040-080a7e5c9026\" (UID: \"6447845f-e96b-4287-9040-080a7e5c9026\") " Jan 23 07:59:13 crc kubenswrapper[4784]: I0123 07:59:13.933430 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-utilities" (OuterVolumeSpecName: "utilities") pod "6447845f-e96b-4287-9040-080a7e5c9026" (UID: "6447845f-e96b-4287-9040-080a7e5c9026"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:13 crc kubenswrapper[4784]: I0123 07:59:13.947056 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6447845f-e96b-4287-9040-080a7e5c9026-kube-api-access-4czdf" (OuterVolumeSpecName: "kube-api-access-4czdf") pod "6447845f-e96b-4287-9040-080a7e5c9026" (UID: "6447845f-e96b-4287-9040-080a7e5c9026"). InnerVolumeSpecName "kube-api-access-4czdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:13 crc kubenswrapper[4784]: I0123 07:59:13.953458 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6447845f-e96b-4287-9040-080a7e5c9026" (UID: "6447845f-e96b-4287-9040-080a7e5c9026"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.034187 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.034222 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6447845f-e96b-4287-9040-080a7e5c9026-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.034231 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4czdf\" (UniqueName: \"kubernetes.io/projected/6447845f-e96b-4287-9040-080a7e5c9026-kube-api-access-4czdf\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.348883 4784 generic.go:334] "Generic (PLEG): container finished" podID="6447845f-e96b-4287-9040-080a7e5c9026" containerID="3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd" exitCode=0 Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.348948 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jtk" event={"ID":"6447845f-e96b-4287-9040-080a7e5c9026","Type":"ContainerDied","Data":"3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd"} Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.348981 4784 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9jtk" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.349011 4784 scope.go:117] "RemoveContainer" containerID="3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.348992 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9jtk" event={"ID":"6447845f-e96b-4287-9040-080a7e5c9026","Type":"ContainerDied","Data":"6e51624990893526c26b4c221169c4c47e2cb93265aee6c1768326f7d377fa76"} Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.387174 4784 scope.go:117] "RemoveContainer" containerID="4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.409955 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jtk"] Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.419737 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9jtk"] Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.437844 4784 scope.go:117] "RemoveContainer" containerID="1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.495170 4784 scope.go:117] "RemoveContainer" containerID="3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd" Jan 23 07:59:14 crc kubenswrapper[4784]: E0123 07:59:14.495695 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd\": container with ID starting with 3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd not found: ID does not exist" containerID="3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.495746 4784 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd"} err="failed to get container status \"3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd\": rpc error: code = NotFound desc = could not find container \"3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd\": container with ID starting with 3c8faab987a8ac165132f48054553f440a1817231bdcb11e1be7b1d184baf8cd not found: ID does not exist" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.495912 4784 scope.go:117] "RemoveContainer" containerID="4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254" Jan 23 07:59:14 crc kubenswrapper[4784]: E0123 07:59:14.496360 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254\": container with ID starting with 4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254 not found: ID does not exist" containerID="4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.496414 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254"} err="failed to get container status \"4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254\": rpc error: code = NotFound desc = could not find container \"4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254\": container with ID starting with 4d693f71d1539469bed882570313c3e33410e18d637327b97a1f3381aee1c254 not found: ID does not exist" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.496449 4784 scope.go:117] "RemoveContainer" containerID="1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e" Jan 23 07:59:14 crc kubenswrapper[4784]: E0123 
07:59:14.496957 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e\": container with ID starting with 1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e not found: ID does not exist" containerID="1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e" Jan 23 07:59:14 crc kubenswrapper[4784]: I0123 07:59:14.496987 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e"} err="failed to get container status \"1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e\": rpc error: code = NotFound desc = could not find container \"1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e\": container with ID starting with 1998fb55d2887258d48974e88986f87ccfaaca3cb4eb207ba65ce1accee8187e not found: ID does not exist" Jan 23 07:59:15 crc kubenswrapper[4784]: I0123 07:59:15.275364 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6447845f-e96b-4287-9040-080a7e5c9026" path="/var/lib/kubelet/pods/6447845f-e96b-4287-9040-080a7e5c9026/volumes" Jan 23 07:59:17 crc kubenswrapper[4784]: I0123 07:59:17.272006 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:59:17 crc kubenswrapper[4784]: E0123 07:59:17.272815 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:59:24 crc kubenswrapper[4784]: I0123 07:59:24.392859 
4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b88c57956-78khw_56a8c456-d460-464b-9425-0d5878f12ba5/barbican-api/0.log" Jan 23 07:59:24 crc kubenswrapper[4784]: I0123 07:59:24.517432 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b88c57956-78khw_56a8c456-d460-464b-9425-0d5878f12ba5/barbican-api-log/0.log" Jan 23 07:59:24 crc kubenswrapper[4784]: I0123 07:59:24.584962 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64f9677dc8-64nzl_11cf409f-f9ae-4e80-87db-66495679cf86/barbican-keystone-listener/0.log" Jan 23 07:59:24 crc kubenswrapper[4784]: I0123 07:59:24.711575 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64f9677dc8-64nzl_11cf409f-f9ae-4e80-87db-66495679cf86/barbican-keystone-listener-log/0.log" Jan 23 07:59:24 crc kubenswrapper[4784]: I0123 07:59:24.804463 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd9757bf9-7tmmd_06b51980-afe8-4434-b345-022a3be8f449/barbican-worker/0.log" Jan 23 07:59:24 crc kubenswrapper[4784]: I0123 07:59:24.885787 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd9757bf9-7tmmd_06b51980-afe8-4434-b345-022a3be8f449/barbican-worker-log/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.024105 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kbmgz_64311990-e01e-4553-89da-a3c7bb54b63c/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.126936 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_263b6093-4133-4159-b83a-32199b46fa5d/ceilometer-central-agent/1.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.218153 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_263b6093-4133-4159-b83a-32199b46fa5d/ceilometer-central-agent/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.280387 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_263b6093-4133-4159-b83a-32199b46fa5d/ceilometer-notification-agent/1.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.285379 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_263b6093-4133-4159-b83a-32199b46fa5d/ceilometer-notification-agent/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.353264 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_263b6093-4133-4159-b83a-32199b46fa5d/proxy-httpd/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.396422 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_263b6093-4133-4159-b83a-32199b46fa5d/sg-core/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.559626 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_dfb6df04-2d1b-4058-b54e-122d31b83c46/cinder-api-log/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.651109 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_dfb6df04-2d1b-4058-b54e-122d31b83c46/cinder-api/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.724066 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_87ac961b-d41b-43ef-b55e-07b0cf093e56/cinder-scheduler/1.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.802202 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_87ac961b-d41b-43ef-b55e-07b0cf093e56/cinder-scheduler/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.878406 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_87ac961b-d41b-43ef-b55e-07b0cf093e56/probe/0.log" Jan 23 07:59:25 crc kubenswrapper[4784]: I0123 07:59:25.958399 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vbwk4_5c868e7c-e48a-4534-a594-a785fcd2e39e/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.127062 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s8vr2_f202f8c4-5d8c-4cca-a9f6-ebf39f16cead/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.265694 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-s7ths_1b313cca-4d7d-435b-9c85-8ca53f4b4bf1/init/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.401983 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-s7ths_1b313cca-4d7d-435b-9c85-8ca53f4b4bf1/init/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.523715 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-8nzrh_8183fd3f-f4c4-45b4-950d-c12e94455abe/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.628274 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6cd9bffc9-s7ths_1b313cca-4d7d-435b-9c85-8ca53f4b4bf1/dnsmasq-dns/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.765336 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_01e78a8b-1136-4b2e-9d1d-20533086ea3e/glance-log/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.944781 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_21429e2a-c0f1-47fa-8a30-0577e1e9e72c/glance-log/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.978675 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_21429e2a-c0f1-47fa-8a30-0577e1e9e72c/glance-httpd/0.log" Jan 23 07:59:26 crc kubenswrapper[4784]: I0123 07:59:26.981948 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_01e78a8b-1136-4b2e-9d1d-20533086ea3e/glance-httpd/0.log" Jan 23 07:59:27 crc kubenswrapper[4784]: I0123 07:59:27.234859 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65775dd4cd-wxtf2_9a1391cd-fdf4-4770-ba43-17cb0657e117/horizon/0.log" Jan 23 07:59:27 crc kubenswrapper[4784]: I0123 07:59:27.443416 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-k75hw_b38255f5-498b-4d24-9754-1c994d7b260c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:27 crc kubenswrapper[4784]: I0123 07:59:27.581002 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4vzgt_0402642d-23da-49d9-9175-8bff0326b7fd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:27 crc kubenswrapper[4784]: I0123 07:59:27.929407 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65775dd4cd-wxtf2_9a1391cd-fdf4-4770-ba43-17cb0657e117/horizon-log/0.log" Jan 23 07:59:27 crc kubenswrapper[4784]: I0123 07:59:27.940867 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29485861-48jdw_b2e51175-c98c-49a9-ac8b-511b91913b99/keystone-cron/0.log" Jan 23 07:59:28 crc kubenswrapper[4784]: I0123 07:59:28.102374 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-5959d8d8f9-nvgzc_82f608f8-8c09-4f0a-b618-6a90c4d2794f/keystone-api/0.log" Jan 23 07:59:28 crc kubenswrapper[4784]: I0123 07:59:28.143057 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4e84d3df-4011-472a-9b95-9ed21dea27d5/kube-state-metrics/0.log" Jan 23 07:59:28 crc kubenswrapper[4784]: I0123 07:59:28.165987 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4e84d3df-4011-472a-9b95-9ed21dea27d5/kube-state-metrics/1.log" Jan 23 07:59:28 crc kubenswrapper[4784]: I0123 07:59:28.310410 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-f6mpw_fe48ab60-daab-4f78-8276-76ddc1745644/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:28 crc kubenswrapper[4784]: I0123 07:59:28.691476 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-62tjr_f9d1c448-c73e-4e10-8265-5c19080dc923/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:28 crc kubenswrapper[4784]: I0123 07:59:28.695689 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c68c7795c-7p5x6_5d7a679e-f4a7-4d19-89bb-2140b97e32ed/neutron-httpd/0.log" Jan 23 07:59:28 crc kubenswrapper[4784]: I0123 07:59:28.700157 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c68c7795c-7p5x6_5d7a679e-f4a7-4d19-89bb-2140b97e32ed/neutron-api/0.log" Jan 23 07:59:29 crc kubenswrapper[4784]: I0123 07:59:29.191972 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_49ca1bfc-7758-4d5d-85f5-1ffd4ee430ba/nova-cell0-conductor-conductor/0.log" Jan 23 07:59:29 crc kubenswrapper[4784]: I0123 07:59:29.583038 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_d47922f2-9fc2-41d3-bd0b-7df1a238a218/nova-cell1-conductor-conductor/0.log" Jan 23 07:59:29 crc kubenswrapper[4784]: I0123 07:59:29.833622 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_871d6e64-73c9-4a77-8bae-8c96cad28acb/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 07:59:30 crc kubenswrapper[4784]: I0123 07:59:30.101176 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-v8n5s_434b967e-b70f-4fae-9cec-5c7f6b78c5d2/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:30 crc kubenswrapper[4784]: I0123 07:59:30.121063 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd7b402f-9e10-4056-9911-be0cbb5fab92/nova-api-log/0.log" Jan 23 07:59:30 crc kubenswrapper[4784]: I0123 07:59:30.344315 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fd7b402f-9e10-4056-9911-be0cbb5fab92/nova-api-api/0.log" Jan 23 07:59:30 crc kubenswrapper[4784]: I0123 07:59:30.401241 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a52c9612-2f18-438f-aacb-5f9ec3c24082/nova-metadata-log/0.log" Jan 23 07:59:30 crc kubenswrapper[4784]: I0123 07:59:30.610011 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8f66f97d-f8a6-4316-ba8b-cbbd922a1655/mysql-bootstrap/0.log" Jan 23 07:59:30 crc kubenswrapper[4784]: I0123 07:59:30.854765 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8f66f97d-f8a6-4316-ba8b-cbbd922a1655/mysql-bootstrap/0.log" Jan 23 07:59:30 crc kubenswrapper[4784]: I0123 07:59:30.872488 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a834a7c0-2d61-4cae-a9aa-b9d79f2d92e6/nova-scheduler-scheduler/0.log" Jan 23 07:59:30 crc kubenswrapper[4784]: I0123 07:59:30.902103 4784 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8f66f97d-f8a6-4316-ba8b-cbbd922a1655/galera/0.log" Jan 23 07:59:31 crc kubenswrapper[4784]: I0123 07:59:31.074269 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_85680fc8-18ee-4984-8bdb-a489d1e71d39/mysql-bootstrap/0.log" Jan 23 07:59:31 crc kubenswrapper[4784]: I0123 07:59:31.256800 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:59:31 crc kubenswrapper[4784]: E0123 07:59:31.257111 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:59:31 crc kubenswrapper[4784]: I0123 07:59:31.355739 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_85680fc8-18ee-4984-8bdb-a489d1e71d39/mysql-bootstrap/0.log" Jan 23 07:59:31 crc kubenswrapper[4784]: I0123 07:59:31.382359 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_85680fc8-18ee-4984-8bdb-a489d1e71d39/galera/0.log" Jan 23 07:59:31 crc kubenswrapper[4784]: I0123 07:59:31.534647 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b6a24fa8-a5c2-4812-97c2-685330a66205/openstackclient/0.log" Jan 23 07:59:31 crc kubenswrapper[4784]: I0123 07:59:31.648855 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dhqg4_63a3df2f-490b-4cac-89f8-bec049380a07/openstack-network-exporter/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.024191 4784 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5dcn_9852b9db-9435-4bdd-a282-7727fd01a651/ovsdb-server-init/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.211652 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5dcn_9852b9db-9435-4bdd-a282-7727fd01a651/ovs-vswitchd/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.235570 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5dcn_9852b9db-9435-4bdd-a282-7727fd01a651/ovsdb-server-init/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.282084 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k5dcn_9852b9db-9435-4bdd-a282-7727fd01a651/ovsdb-server/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.435626 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sj5dx_0d8e3d77-6347-49cf-9ffa-335c063b8f12/ovn-controller/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.593149 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a52c9612-2f18-438f-aacb-5f9ec3c24082/nova-metadata-metadata/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.690310 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-vg9vh_cb18257b-963a-49bb-a493-0da8a460532f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.772347 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_73417c1c-ce94-42f8-bdcb-6db903adc851/openstack-network-exporter/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.817298 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_73417c1c-ce94-42f8-bdcb-6db903adc851/ovn-northd/0.log" Jan 23 07:59:32 crc kubenswrapper[4784]: I0123 07:59:32.966221 4784 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2317a2c2-318f-46c1-98d0-61c93c840b91/openstack-network-exporter/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.037485 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2317a2c2-318f-46c1-98d0-61c93c840b91/ovsdbserver-nb/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.144085 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b59b602d-4a20-4b11-8577-d13582d30ce8/openstack-network-exporter/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.204054 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b59b602d-4a20-4b11-8577-d13582d30ce8/ovsdbserver-sb/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.544885 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65d5f4f9bd-jjkgn_20502d07-c74c-4f56-9ea3-10bc8746f31b/placement-api/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.587508 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65d5f4f9bd-jjkgn_20502d07-c74c-4f56-9ea3-10bc8746f31b/placement-log/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.638674 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ac872c1-b445-4e65-bb7a-47962509618c/init-config-reloader/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.756491 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ac872c1-b445-4e65-bb7a-47962509618c/config-reloader/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.767276 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ac872c1-b445-4e65-bb7a-47962509618c/init-config-reloader/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.844866 
4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ac872c1-b445-4e65-bb7a-47962509618c/prometheus/0.log" Jan 23 07:59:33 crc kubenswrapper[4784]: I0123 07:59:33.871863 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0ac872c1-b445-4e65-bb7a-47962509618c/thanos-sidecar/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: I0123 07:59:34.018551 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_807272ae-7f38-45f1-acd2-984a1a1840d8/setup-container/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: I0123 07:59:34.287043 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_807272ae-7f38-45f1-acd2-984a1a1840d8/setup-container/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: I0123 07:59:34.348052 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f65404b7-5dd6-409f-87c1-633679f2d5cb/setup-container/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: I0123 07:59:34.353386 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_807272ae-7f38-45f1-acd2-984a1a1840d8/rabbitmq/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: I0123 07:59:34.549562 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f65404b7-5dd6-409f-87c1-633679f2d5cb/rabbitmq/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: I0123 07:59:34.584885 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xhdkb_8a2d0f94-cbd7-4ff7-9fd0-53a9ac80ed10/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: I0123 07:59:34.592343 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f65404b7-5dd6-409f-87c1-633679f2d5cb/setup-container/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: 
I0123 07:59:34.846662 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-pbcbm_b1c57ee2-9b78-4fd3-a5d8-e46caf648c4f/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:34 crc kubenswrapper[4784]: I0123 07:59:34.920004 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kv77w_0ecbc8c8-6db3-43c4-8b23-e2f7d72082c4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.048589 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-svmtn_ebc0675f-b9ae-44e0-bfb8-601977c9936c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.177011 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-zfrjd_0546d855-6190-43a0-8fd3-7897c1c9dc80/ssh-known-hosts-edpm-deployment/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.431330 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-856bb5496c-5hkpt_bfac942c-ab7e-42a0-8091-29079fd4da0e/proxy-server/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.487845 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-v8cqj_008ddd6f-ae82-41ee-a0d7-ad63e2880889/swift-ring-rebalance/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.564374 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-856bb5496c-5hkpt_bfac942c-ab7e-42a0-8091-29079fd4da0e/proxy-httpd/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.661721 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/account-auditor/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.742609 4784 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/account-reaper/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.858365 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/account-replicator/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.866985 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/account-server/0.log" Jan 23 07:59:35 crc kubenswrapper[4784]: I0123 07:59:35.920176 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/container-auditor/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.048092 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/container-replicator/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.087878 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/container-updater/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.110772 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/container-server/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.184573 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/object-auditor/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.278285 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/object-expirer/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.326019 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/object-server/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.328705 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/object-replicator/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.438334 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/object-updater/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.534184 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/rsync/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.593250 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_abb5c886-7378-4bdd-b56a-cc803db75cbd/swift-recon-cron/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.758622 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-nf9kl_39095423-09e7-4099-8256-b1eab02f4707/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.893349 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a1b6a74a-9c47-4dab-aa71-9ca73ef1dbcc/tempest-tests-tempest-tests-runner/0.log" Jan 23 07:59:36 crc kubenswrapper[4784]: I0123 07:59:36.985372 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_19d1621c-2a08-4b32-8039-f3ba8d4ea222/test-operator-logs-container/0.log" Jan 23 07:59:37 crc kubenswrapper[4784]: I0123 07:59:37.100148 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-n9xqd_0199cc2a-5880-4f6e-b157-23bf20f33487/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 07:59:37 crc kubenswrapper[4784]: I0123 07:59:37.909043 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_1ac1f415-cec7-4110-a87a-9a725a6bf7bb/watcher-applier/0.log" Jan 23 07:59:38 crc kubenswrapper[4784]: I0123 07:59:38.275163 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_727a8885-767d-45ab-a5d7-52a44e0d3823/watcher-api-log/0.log" Jan 23 07:59:39 crc kubenswrapper[4784]: I0123 07:59:39.119078 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_5394d5ac-2fa5-4720-9b3e-b392db36e106/watcher-decision-engine/0.log" Jan 23 07:59:41 crc kubenswrapper[4784]: I0123 07:59:41.593067 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_727a8885-767d-45ab-a5d7-52a44e0d3823/watcher-api/0.log" Jan 23 07:59:44 crc kubenswrapper[4784]: I0123 07:59:44.253838 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:59:44 crc kubenswrapper[4784]: E0123 07:59:44.254428 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 07:59:46 crc kubenswrapper[4784]: I0123 07:59:46.826045 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e8477c9f-b8db-4b9e-bf60-1a614700e001/memcached/0.log" Jan 23 07:59:59 crc kubenswrapper[4784]: I0123 07:59:59.254581 4784 
scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 07:59:59 crc kubenswrapper[4784]: E0123 07:59:59.255538 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.166597 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl"] Jan 23 08:00:00 crc kubenswrapper[4784]: E0123 08:00:00.167205 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6447845f-e96b-4287-9040-080a7e5c9026" containerName="extract-content" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.167233 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6447845f-e96b-4287-9040-080a7e5c9026" containerName="extract-content" Jan 23 08:00:00 crc kubenswrapper[4784]: E0123 08:00:00.167280 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6447845f-e96b-4287-9040-080a7e5c9026" containerName="registry-server" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.167293 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6447845f-e96b-4287-9040-080a7e5c9026" containerName="registry-server" Jan 23 08:00:00 crc kubenswrapper[4784]: E0123 08:00:00.167319 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6447845f-e96b-4287-9040-080a7e5c9026" containerName="extract-utilities" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.167331 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="6447845f-e96b-4287-9040-080a7e5c9026" containerName="extract-utilities" Jan 23 
08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.167685 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="6447845f-e96b-4287-9040-080a7e5c9026" containerName="registry-server" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.168783 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.172642 4784 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.173926 4784 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.182450 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl"] Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.262945 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-config-volume\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.263053 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-secret-volume\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.263197 4784 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8g2x\" (UniqueName: \"kubernetes.io/projected/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-kube-api-access-v8g2x\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.364854 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8g2x\" (UniqueName: \"kubernetes.io/projected/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-kube-api-access-v8g2x\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.365115 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-config-volume\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.365220 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-secret-volume\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.367343 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-config-volume\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.372688 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-secret-volume\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.385133 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8g2x\" (UniqueName: \"kubernetes.io/projected/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-kube-api-access-v8g2x\") pod \"collect-profiles-29485920-4lkxl\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.499381 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" Jan 23 08:00:00 crc kubenswrapper[4784]: I0123 08:00:00.969293 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl"] Jan 23 08:00:01 crc kubenswrapper[4784]: I0123 08:00:01.807626 4784 generic.go:334] "Generic (PLEG): container finished" podID="1dd6c6b0-eee4-4c2a-8b4c-873ca7128472" containerID="4e09f480ced3447ca9b5360166432786db3c395f88522592b660459ce661c217" exitCode=0 Jan 23 08:00:01 crc kubenswrapper[4784]: I0123 08:00:01.807726 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" event={"ID":"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472","Type":"ContainerDied","Data":"4e09f480ced3447ca9b5360166432786db3c395f88522592b660459ce661c217"} Jan 23 08:00:01 crc kubenswrapper[4784]: I0123 08:00:01.807888 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" event={"ID":"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472","Type":"ContainerStarted","Data":"9e19e0f98f279a78fdbbd4b793979bc49d5a174b5f94e213fc1d4b8d25952680"} Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.138471 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl"
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.223619 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8g2x\" (UniqueName: \"kubernetes.io/projected/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-kube-api-access-v8g2x\") pod \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") "
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.223894 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-secret-volume\") pod \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") "
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.223998 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-config-volume\") pod \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\" (UID: \"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472\") "
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.224595 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-config-volume" (OuterVolumeSpecName: "config-volume") pod "1dd6c6b0-eee4-4c2a-8b4c-873ca7128472" (UID: "1dd6c6b0-eee4-4c2a-8b4c-873ca7128472"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.224850 4784 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.229545 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1dd6c6b0-eee4-4c2a-8b4c-873ca7128472" (UID: "1dd6c6b0-eee4-4c2a-8b4c-873ca7128472"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.229918 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-kube-api-access-v8g2x" (OuterVolumeSpecName: "kube-api-access-v8g2x") pod "1dd6c6b0-eee4-4c2a-8b4c-873ca7128472" (UID: "1dd6c6b0-eee4-4c2a-8b4c-873ca7128472"). InnerVolumeSpecName "kube-api-access-v8g2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.326383 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8g2x\" (UniqueName: \"kubernetes.io/projected/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-kube-api-access-v8g2x\") on node \"crc\" DevicePath \"\""
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.326726 4784 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd6c6b0-eee4-4c2a-8b4c-873ca7128472-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.829818 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl" event={"ID":"1dd6c6b0-eee4-4c2a-8b4c-873ca7128472","Type":"ContainerDied","Data":"9e19e0f98f279a78fdbbd4b793979bc49d5a174b5f94e213fc1d4b8d25952680"}
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.829862 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-4lkxl"
Jan 23 08:00:03 crc kubenswrapper[4784]: I0123 08:00:03.829877 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e19e0f98f279a78fdbbd4b793979bc49d5a174b5f94e213fc1d4b8d25952680"
Jan 23 08:00:04 crc kubenswrapper[4784]: I0123 08:00:04.216902 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd"]
Jan 23 08:00:04 crc kubenswrapper[4784]: I0123 08:00:04.228370 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-bkkqd"]
Jan 23 08:00:05 crc kubenswrapper[4784]: I0123 08:00:05.264129 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93e9b23-2b0e-4107-9a5b-74c94e40fc62" path="/var/lib/kubelet/pods/b93e9b23-2b0e-4107-9a5b-74c94e40fc62/volumes"
Jan 23 08:00:05 crc kubenswrapper[4784]: I0123 08:00:05.503923 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp_71dd6098-21e1-4844-bf38-85ff115f9157/util/0.log"
Jan 23 08:00:05 crc kubenswrapper[4784]: I0123 08:00:05.638743 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp_71dd6098-21e1-4844-bf38-85ff115f9157/util/0.log"
Jan 23 08:00:05 crc kubenswrapper[4784]: I0123 08:00:05.658456 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp_71dd6098-21e1-4844-bf38-85ff115f9157/pull/0.log"
Jan 23 08:00:05 crc kubenswrapper[4784]: I0123 08:00:05.673853 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp_71dd6098-21e1-4844-bf38-85ff115f9157/pull/0.log"
Jan 23 08:00:05 crc kubenswrapper[4784]: I0123 08:00:05.846317 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp_71dd6098-21e1-4844-bf38-85ff115f9157/util/0.log"
Jan 23 08:00:05 crc kubenswrapper[4784]: I0123 08:00:05.851012 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp_71dd6098-21e1-4844-bf38-85ff115f9157/pull/0.log"
Jan 23 08:00:05 crc kubenswrapper[4784]: I0123 08:00:05.920308 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8458169c3f348dba18f74f2dbb8ae266c9b5bd7879f2ad6ad7a463a4c9wqthp_71dd6098-21e1-4844-bf38-85ff115f9157/extract/0.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.057969 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-q7sn8_0e01c35c-c9bd-4b02-adb1-be49a504ea54/manager/1.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.092967 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-q7sn8_0e01c35c-c9bd-4b02-adb1-be49a504ea54/manager/0.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.203928 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-kl6d5_7c5e978b-ac3c-439e-b2b1-ab025c130984/manager/1.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.308065 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-kl6d5_7c5e978b-ac3c-439e-b2b1-ab025c130984/manager/0.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.353175 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-zkswk_55f3492a-a5c0-460b-a93b-eb680b426a7c/manager/1.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.413517 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-zkswk_55f3492a-a5c0-460b-a93b-eb680b426a7c/manager/0.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.498916 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-nb6tb_f54aca80-78ad-4bda-905c-0a519a4f33ed/manager/1.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.583449 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-nb6tb_f54aca80-78ad-4bda-905c-0a519a4f33ed/manager/0.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.685394 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-hcqtn_417f228a-38b7-448a-980d-f64d6e113646/manager/0.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.705742 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-hcqtn_417f228a-38b7-448a-980d-f64d6e113646/manager/1.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.801651 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lvmlf_4fa12cd4-f2bc-4863-8b67-e246a0becee3/manager/1.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.867316 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-lvmlf_4fa12cd4-f2bc-4863-8b67-e246a0becee3/manager/0.log"
Jan 23 08:00:06 crc kubenswrapper[4784]: I0123 08:00:06.965282 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-hl8gk_758913f1-9ef1-4fe9-9d5f-2cb794fcddef/manager/1.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.104574 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-2vptn_89f228f9-5c69-4e48-bf35-01cc25b56ecd/manager/1.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.206429 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-2vptn_89f228f9-5c69-4e48-bf35-01cc25b56ecd/manager/0.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.300765 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-hl8gk_758913f1-9ef1-4fe9-9d5f-2cb794fcddef/manager/0.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.397120 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-7znp2_1cd86a7e-7738-4a67-9c19-d34a70dbc9fe/manager/1.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.465077 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-7znp2_1cd86a7e-7738-4a67-9c19-d34a70dbc9fe/manager/0.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.585256 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-wzjzl_138e85ae-26a7-45f3-ac25-61ece9cf8573/manager/1.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.646427 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-wzjzl_138e85ae-26a7-45f3-ac25-61ece9cf8573/manager/0.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.758064 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7_f809f5f2-7409-4d7e-b938-1efc34dc4c2f/manager/1.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.802881 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-mtxb7_f809f5f2-7409-4d7e-b938-1efc34dc4c2f/manager/0.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.901109 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-krp8w_be79eaa0-8040-4009-9f16-fcb56bffbff7/manager/1.log"
Jan 23 08:00:07 crc kubenswrapper[4784]: I0123 08:00:07.974831 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-krp8w_be79eaa0-8040-4009-9f16-fcb56bffbff7/manager/0.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.050483 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-82hzn_9bc11b97-7610-4c0f-898a-bb42b42c37d7/manager/1.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.179944 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-82hzn_9bc11b97-7610-4c0f-898a-bb42b42c37d7/manager/0.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.203239 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-jqhrt_89a376c8-b238-445d-99da-b85f3c421125/manager/1.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.280936 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-jqhrt_89a376c8-b238-445d-99da-b85f3c421125/manager/0.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.360207 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6_500659da-123f-4500-9c50-2b7b3b7656df/manager/1.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.439859 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854xdgx6_500659da-123f-4500-9c50-2b7b3b7656df/manager/0.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.599739 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c664964d9-t6kpc_be839066-996a-463b-b96c-a340d4e55ffd/operator/1.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.715367 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7c664964d9-t6kpc_be839066-996a-463b-b96c-a340d4e55ffd/operator/0.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.751284 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cccd889d5-jxhkn_409eb30c-947e-4d15-9b7c-8a73ba35ad70/manager/1.log"
Jan 23 08:00:08 crc kubenswrapper[4784]: I0123 08:00:08.939833 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-q72fk_6a5cd19e-60f5-431b-87fe-4eb262ca0f2e/registry-server/0.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.072300 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-2wrsg_e8de3214-d1e9-4800-9ace-51a85b326df8/manager/1.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.163429 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-c2btv_2c2a2d81-11ef-4146-ad50-8f7f39163253/manager/1.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.192606 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-2wrsg_e8de3214-d1e9-4800-9ace-51a85b326df8/manager/0.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.289200 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-c2btv_2c2a2d81-11ef-4146-ad50-8f7f39163253/manager/0.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.367293 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5dsxt_80f7466e-7d6a-4416-9259-c30d69ee725e/operator/1.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.440186 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5dsxt_80f7466e-7d6a-4416-9259-c30d69ee725e/operator/0.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.565580 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-gbncb_d6c01b10-21b9-4e8b-b051-6f148f468828/manager/1.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.610148 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-gbncb_d6c01b10-21b9-4e8b-b051-6f148f468828/manager/0.log"
Jan 23 08:00:09 crc kubenswrapper[4784]: I0123 08:00:09.814588 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-c2zh7_3a006f0b-6298-4509-9533-178b38906875/manager/1.log"
Jan 23 08:00:10 crc kubenswrapper[4784]: I0123 08:00:10.087937 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-94f28_3b13bce8-a43d-4833-9472-81f048a95be3/manager/1.log"
Jan 23 08:00:10 crc kubenswrapper[4784]: I0123 08:00:10.124316 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-94f28_3b13bce8-a43d-4833-9472-81f048a95be3/manager/0.log"
Jan 23 08:00:10 crc kubenswrapper[4784]: I0123 08:00:10.129489 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-c2zh7_3a006f0b-6298-4509-9533-178b38906875/manager/0.log"
Jan 23 08:00:10 crc kubenswrapper[4784]: I0123 08:00:10.208247 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cccd889d5-jxhkn_409eb30c-947e-4d15-9b7c-8a73ba35ad70/manager/0.log"
Jan 23 08:00:10 crc kubenswrapper[4784]: I0123 08:00:10.264583 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5b5d4f4b97-64mxt_2e269fdb-0502-4d62-9a0d-15094fdd942c/manager/1.log"
Jan 23 08:00:10 crc kubenswrapper[4784]: I0123 08:00:10.356072 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5b5d4f4b97-64mxt_2e269fdb-0502-4d62-9a0d-15094fdd942c/manager/0.log"
Jan 23 08:00:12 crc kubenswrapper[4784]: I0123 08:00:12.254064 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f"
Jan 23 08:00:12 crc kubenswrapper[4784]: E0123 08:00:12.254495 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 08:00:25 crc kubenswrapper[4784]: I0123 08:00:25.258673 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f"
Jan 23 08:00:25 crc kubenswrapper[4784]: E0123 08:00:25.259592 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 08:00:30 crc kubenswrapper[4784]: I0123 08:00:30.960139 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-czr5t_75a8ac2c-f286-499e-9faa-03f25bc7f579/control-plane-machine-set-operator/0.log"
Jan 23 08:00:31 crc kubenswrapper[4784]: I0123 08:00:31.121593 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ltcmm_ba69339a-1102-4a25-ae4e-a70b643e6ff1/kube-rbac-proxy/0.log"
Jan 23 08:00:31 crc kubenswrapper[4784]: I0123 08:00:31.180561 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ltcmm_ba69339a-1102-4a25-ae4e-a70b643e6ff1/machine-api-operator/0.log"
Jan 23 08:00:37 crc kubenswrapper[4784]: I0123 08:00:37.273682 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f"
Jan 23 08:00:37 crc kubenswrapper[4784]: E0123 08:00:37.276004 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 08:00:42 crc kubenswrapper[4784]: I0123 08:00:42.954692 4784 scope.go:117] "RemoveContainer" containerID="35d9370bf55163a1a28a3e46db6a38da12559a2e1ceca61745d8d494b57c9947"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.366680 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dgb58"]
Jan 23 08:00:45 crc kubenswrapper[4784]: E0123 08:00:45.367446 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd6c6b0-eee4-4c2a-8b4c-873ca7128472" containerName="collect-profiles"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.367462 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd6c6b0-eee4-4c2a-8b4c-873ca7128472" containerName="collect-profiles"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.371867 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd6c6b0-eee4-4c2a-8b4c-873ca7128472" containerName="collect-profiles"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.373841 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.377909 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dgb58"]
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.569301 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-utilities\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.569428 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rnq\" (UniqueName: \"kubernetes.io/projected/056bb973-0323-4906-a1cd-99de64ee54e8-kube-api-access-j8rnq\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.569486 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-catalog-content\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.671727 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-utilities\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.671902 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8rnq\" (UniqueName: \"kubernetes.io/projected/056bb973-0323-4906-a1cd-99de64ee54e8-kube-api-access-j8rnq\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.671981 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-catalog-content\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.672214 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-utilities\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.672434 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-catalog-content\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.693408 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8rnq\" (UniqueName: \"kubernetes.io/projected/056bb973-0323-4906-a1cd-99de64ee54e8-kube-api-access-j8rnq\") pod \"community-operators-dgb58\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") " pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:45 crc kubenswrapper[4784]: I0123 08:00:45.992638 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:46 crc kubenswrapper[4784]: I0123 08:00:46.483549 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dgb58"]
Jan 23 08:00:46 crc kubenswrapper[4784]: W0123 08:00:46.485635 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod056bb973_0323_4906_a1cd_99de64ee54e8.slice/crio-0d2c615eef40796124d48bc6846c1f0df67c8a6196e58998e8d66e77d92e19cd WatchSource:0}: Error finding container 0d2c615eef40796124d48bc6846c1f0df67c8a6196e58998e8d66e77d92e19cd: Status 404 returned error can't find the container with id 0d2c615eef40796124d48bc6846c1f0df67c8a6196e58998e8d66e77d92e19cd
Jan 23 08:00:46 crc kubenswrapper[4784]: I0123 08:00:46.913276 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-dtn5v_e6bccf31-7461-4999-9fdf-b6f2a17b50c4/cert-manager-controller/0.log"
Jan 23 08:00:47 crc kubenswrapper[4784]: I0123 08:00:47.012615 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-tmn9t_5528c51f-4fc7-4a52-9e8d-f38af10c6874/cert-manager-cainjector/0.log"
Jan 23 08:00:47 crc kubenswrapper[4784]: I0123 08:00:47.108159 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-q626v_4c2ff224-fd79-4b3d-8bc7-95199aec7841/cert-manager-webhook/0.log"
Jan 23 08:00:47 crc kubenswrapper[4784]: I0123 08:00:47.312403 4784 generic.go:334] "Generic (PLEG): container finished" podID="056bb973-0323-4906-a1cd-99de64ee54e8" containerID="ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8" exitCode=0
Jan 23 08:00:47 crc kubenswrapper[4784]: I0123 08:00:47.312588 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgb58" event={"ID":"056bb973-0323-4906-a1cd-99de64ee54e8","Type":"ContainerDied","Data":"ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8"}
Jan 23 08:00:47 crc kubenswrapper[4784]: I0123 08:00:47.312730 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgb58" event={"ID":"056bb973-0323-4906-a1cd-99de64ee54e8","Type":"ContainerStarted","Data":"0d2c615eef40796124d48bc6846c1f0df67c8a6196e58998e8d66e77d92e19cd"}
Jan 23 08:00:49 crc kubenswrapper[4784]: I0123 08:00:49.253971 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f"
Jan 23 08:00:49 crc kubenswrapper[4784]: E0123 08:00:49.255367 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"
Jan 23 08:00:50 crc kubenswrapper[4784]: I0123 08:00:50.341141 4784 generic.go:334] "Generic (PLEG): container finished" podID="056bb973-0323-4906-a1cd-99de64ee54e8" containerID="9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1" exitCode=0
Jan 23 08:00:50 crc kubenswrapper[4784]: I0123 08:00:50.341300 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgb58" event={"ID":"056bb973-0323-4906-a1cd-99de64ee54e8","Type":"ContainerDied","Data":"9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1"}
Jan 23 08:00:51 crc kubenswrapper[4784]: I0123 08:00:51.354729 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgb58" event={"ID":"056bb973-0323-4906-a1cd-99de64ee54e8","Type":"ContainerStarted","Data":"338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1"}
Jan 23 08:00:51 crc kubenswrapper[4784]: I0123 08:00:51.380289 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dgb58" podStartSLOduration=2.8977221269999998 podStartE2EDuration="6.380263809s" podCreationTimestamp="2026-01-23 08:00:45 +0000 UTC" firstStartedPulling="2026-01-23 08:00:47.314386934 +0000 UTC m=+6050.546894908" lastFinishedPulling="2026-01-23 08:00:50.796928596 +0000 UTC m=+6054.029436590" observedRunningTime="2026-01-23 08:00:51.374621499 +0000 UTC m=+6054.607129483" watchObservedRunningTime="2026-01-23 08:00:51.380263809 +0000 UTC m=+6054.612771783"
Jan 23 08:00:55 crc kubenswrapper[4784]: I0123 08:00:55.993136 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:55 crc kubenswrapper[4784]: I0123 08:00:55.994507 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:56 crc kubenswrapper[4784]: I0123 08:00:56.067369 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:56 crc kubenswrapper[4784]: I0123 08:00:56.446557 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:56 crc kubenswrapper[4784]: I0123 08:00:56.497684 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dgb58"]
Jan 23 08:00:58 crc kubenswrapper[4784]: I0123 08:00:58.417881 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dgb58" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" containerName="registry-server" containerID="cri-o://338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1" gracePeriod=2
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.414242 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.429296 4784 generic.go:334] "Generic (PLEG): container finished" podID="056bb973-0323-4906-a1cd-99de64ee54e8" containerID="338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1" exitCode=0
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.429346 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgb58" event={"ID":"056bb973-0323-4906-a1cd-99de64ee54e8","Type":"ContainerDied","Data":"338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1"}
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.429380 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dgb58" event={"ID":"056bb973-0323-4906-a1cd-99de64ee54e8","Type":"ContainerDied","Data":"0d2c615eef40796124d48bc6846c1f0df67c8a6196e58998e8d66e77d92e19cd"}
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.429431 4784 scope.go:117] "RemoveContainer" containerID="338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.429583 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dgb58"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.470108 4784 scope.go:117] "RemoveContainer" containerID="9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.493915 4784 scope.go:117] "RemoveContainer" containerID="ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.540730 4784 scope.go:117] "RemoveContainer" containerID="338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1"
Jan 23 08:00:59 crc kubenswrapper[4784]: E0123 08:00:59.541825 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1\": container with ID starting with 338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1 not found: ID does not exist" containerID="338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.541885 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1"} err="failed to get container status \"338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1\": rpc error: code = NotFound desc = could not find container \"338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1\": container with ID starting with 338e6e778bad8dc3aa9b1166e58c1177d7855c4751b987312052b3198c1b0df1 not found: ID does not exist"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.541956 4784 scope.go:117] "RemoveContainer" containerID="9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1"
Jan 23 08:00:59 crc kubenswrapper[4784]: E0123 08:00:59.542455 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1\": container with ID starting with 9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1 not found: ID does not exist" containerID="9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.542501 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1"} err="failed to get container status \"9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1\": rpc error: code = NotFound desc = could not find container \"9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1\": container with ID starting with 9d9c71b64c491a6ad8f4b0a81d366e94d08dcae0354ecf8974e0f916222016a1 not found: ID does not exist"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.542529 4784 scope.go:117] "RemoveContainer" containerID="ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8"
Jan 23 08:00:59 crc kubenswrapper[4784]: E0123 08:00:59.542909 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8\": container with ID starting with ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8 not found: ID does not exist" containerID="ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.542948 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8"} err="failed to get container status \"ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8\": rpc error: code = NotFound desc = could not find container \"ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8\": container with ID starting with ab084f4b0f77901a5860a1ee760bfd74339a83f5ed8cd82eb4b5788a97fc78f8 not found: ID does not exist"
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.601498 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8rnq\" (UniqueName: \"kubernetes.io/projected/056bb973-0323-4906-a1cd-99de64ee54e8-kube-api-access-j8rnq\") pod \"056bb973-0323-4906-a1cd-99de64ee54e8\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") "
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.601599 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-catalog-content\") pod \"056bb973-0323-4906-a1cd-99de64ee54e8\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") "
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.601807 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-utilities\") pod \"056bb973-0323-4906-a1cd-99de64ee54e8\" (UID: \"056bb973-0323-4906-a1cd-99de64ee54e8\") "
Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.608724 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-utilities" (OuterVolumeSpecName: "utilities") pod "056bb973-0323-4906-a1cd-99de64ee54e8" (UID: "056bb973-0323-4906-a1cd-99de64ee54e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.612037 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056bb973-0323-4906-a1cd-99de64ee54e8-kube-api-access-j8rnq" (OuterVolumeSpecName: "kube-api-access-j8rnq") pod "056bb973-0323-4906-a1cd-99de64ee54e8" (UID: "056bb973-0323-4906-a1cd-99de64ee54e8"). InnerVolumeSpecName "kube-api-access-j8rnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.667451 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "056bb973-0323-4906-a1cd-99de64ee54e8" (UID: "056bb973-0323-4906-a1cd-99de64ee54e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.704070 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8rnq\" (UniqueName: \"kubernetes.io/projected/056bb973-0323-4906-a1cd-99de64ee54e8-kube-api-access-j8rnq\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.704111 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.704122 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/056bb973-0323-4906-a1cd-99de64ee54e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 08:00:59.770229 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dgb58"] Jan 23 08:00:59 crc kubenswrapper[4784]: I0123 
08:00:59.783471 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dgb58"] Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.153480 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29485921-qnsx6"] Jan 23 08:01:00 crc kubenswrapper[4784]: E0123 08:01:00.153987 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" containerName="extract-content" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.154023 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" containerName="extract-content" Jan 23 08:01:00 crc kubenswrapper[4784]: E0123 08:01:00.154079 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" containerName="extract-utilities" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.154087 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" containerName="extract-utilities" Jan 23 08:01:00 crc kubenswrapper[4784]: E0123 08:01:00.154102 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.154111 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.154349 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.155173 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.195786 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485921-qnsx6"] Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.315057 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-config-data\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.315223 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-combined-ca-bundle\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.315252 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-fernet-keys\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.315303 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6h6\" (UniqueName: \"kubernetes.io/projected/ee227024-6845-4f5f-aac4-a9801fb72cdf-kube-api-access-jf6h6\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.417565 4784 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-combined-ca-bundle\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.417627 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-fernet-keys\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.417694 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6h6\" (UniqueName: \"kubernetes.io/projected/ee227024-6845-4f5f-aac4-a9801fb72cdf-kube-api-access-jf6h6\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.417801 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-config-data\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.421291 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-fernet-keys\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.422256 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-combined-ca-bundle\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.437138 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-config-data\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.437423 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6h6\" (UniqueName: \"kubernetes.io/projected/ee227024-6845-4f5f-aac4-a9801fb72cdf-kube-api-access-jf6h6\") pod \"keystone-cron-29485921-qnsx6\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.476951 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:00 crc kubenswrapper[4784]: W0123 08:01:00.995857 4784 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee227024_6845_4f5f_aac4_a9801fb72cdf.slice/crio-041a985343da1e00727a3ee023aea434bc28755f2fe35263194c3897f7ce08d8 WatchSource:0}: Error finding container 041a985343da1e00727a3ee023aea434bc28755f2fe35263194c3897f7ce08d8: Status 404 returned error can't find the container with id 041a985343da1e00727a3ee023aea434bc28755f2fe35263194c3897f7ce08d8 Jan 23 08:01:00 crc kubenswrapper[4784]: I0123 08:01:00.997525 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485921-qnsx6"] Jan 23 08:01:01 crc kubenswrapper[4784]: I0123 08:01:01.253820 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 08:01:01 crc kubenswrapper[4784]: E0123 08:01:01.254064 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:01:01 crc kubenswrapper[4784]: I0123 08:01:01.280112 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056bb973-0323-4906-a1cd-99de64ee54e8" path="/var/lib/kubelet/pods/056bb973-0323-4906-a1cd-99de64ee54e8/volumes" Jan 23 08:01:01 crc kubenswrapper[4784]: I0123 08:01:01.460522 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485921-qnsx6" 
event={"ID":"ee227024-6845-4f5f-aac4-a9801fb72cdf","Type":"ContainerStarted","Data":"041a985343da1e00727a3ee023aea434bc28755f2fe35263194c3897f7ce08d8"} Jan 23 08:01:01 crc kubenswrapper[4784]: I0123 08:01:01.806896 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5tb77_171baadd-8608-486e-a418-65f76de1cf06/nmstate-console-plugin/0.log" Jan 23 08:01:01 crc kubenswrapper[4784]: I0123 08:01:01.915645 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-m77xg_40a88789-2452-42ca-9b44-14a6614b413c/nmstate-handler/0.log" Jan 23 08:01:01 crc kubenswrapper[4784]: I0123 08:01:01.997480 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-kw56f_69d35b69-1071-41f1-ba7c-37f25670f4cb/kube-rbac-proxy/0.log" Jan 23 08:01:02 crc kubenswrapper[4784]: I0123 08:01:02.063652 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-kw56f_69d35b69-1071-41f1-ba7c-37f25670f4cb/nmstate-metrics/0.log" Jan 23 08:01:02 crc kubenswrapper[4784]: I0123 08:01:02.167211 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-r4rm7_58e69dbd-d9f9-48a5-8600-e14bda89ab89/nmstate-operator/0.log" Jan 23 08:01:02 crc kubenswrapper[4784]: I0123 08:01:02.268411 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-cjbl9_21b9c71e-8dc5-41c7-86a3-9d840f155413/nmstate-webhook/0.log" Jan 23 08:01:02 crc kubenswrapper[4784]: I0123 08:01:02.470517 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485921-qnsx6" event={"ID":"ee227024-6845-4f5f-aac4-a9801fb72cdf","Type":"ContainerStarted","Data":"a8e2319d599fd4de05f810bacac3b3978fb4844c8b40f165b1ddd5006cf41cac"} Jan 23 08:01:02 crc kubenswrapper[4784]: I0123 08:01:02.489869 4784 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29485921-qnsx6" podStartSLOduration=2.489838402 podStartE2EDuration="2.489838402s" podCreationTimestamp="2026-01-23 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:01:02.48713729 +0000 UTC m=+6065.719645254" watchObservedRunningTime="2026-01-23 08:01:02.489838402 +0000 UTC m=+6065.722346376" Jan 23 08:01:06 crc kubenswrapper[4784]: I0123 08:01:06.508998 4784 generic.go:334] "Generic (PLEG): container finished" podID="ee227024-6845-4f5f-aac4-a9801fb72cdf" containerID="a8e2319d599fd4de05f810bacac3b3978fb4844c8b40f165b1ddd5006cf41cac" exitCode=0 Jan 23 08:01:06 crc kubenswrapper[4784]: I0123 08:01:06.509046 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485921-qnsx6" event={"ID":"ee227024-6845-4f5f-aac4-a9801fb72cdf","Type":"ContainerDied","Data":"a8e2319d599fd4de05f810bacac3b3978fb4844c8b40f165b1ddd5006cf41cac"} Jan 23 08:01:07 crc kubenswrapper[4784]: I0123 08:01:07.877034 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:07 crc kubenswrapper[4784]: I0123 08:01:07.977825 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-config-data\") pod \"ee227024-6845-4f5f-aac4-a9801fb72cdf\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " Jan 23 08:01:07 crc kubenswrapper[4784]: I0123 08:01:07.977869 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-fernet-keys\") pod \"ee227024-6845-4f5f-aac4-a9801fb72cdf\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " Jan 23 08:01:07 crc kubenswrapper[4784]: I0123 08:01:07.977988 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf6h6\" (UniqueName: \"kubernetes.io/projected/ee227024-6845-4f5f-aac4-a9801fb72cdf-kube-api-access-jf6h6\") pod \"ee227024-6845-4f5f-aac4-a9801fb72cdf\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " Jan 23 08:01:07 crc kubenswrapper[4784]: I0123 08:01:07.978013 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-combined-ca-bundle\") pod \"ee227024-6845-4f5f-aac4-a9801fb72cdf\" (UID: \"ee227024-6845-4f5f-aac4-a9801fb72cdf\") " Jan 23 08:01:07 crc kubenswrapper[4784]: I0123 08:01:07.986323 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ee227024-6845-4f5f-aac4-a9801fb72cdf" (UID: "ee227024-6845-4f5f-aac4-a9801fb72cdf"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:01:07 crc kubenswrapper[4784]: I0123 08:01:07.986801 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee227024-6845-4f5f-aac4-a9801fb72cdf-kube-api-access-jf6h6" (OuterVolumeSpecName: "kube-api-access-jf6h6") pod "ee227024-6845-4f5f-aac4-a9801fb72cdf" (UID: "ee227024-6845-4f5f-aac4-a9801fb72cdf"). InnerVolumeSpecName "kube-api-access-jf6h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.017568 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee227024-6845-4f5f-aac4-a9801fb72cdf" (UID: "ee227024-6845-4f5f-aac4-a9801fb72cdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.049355 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-config-data" (OuterVolumeSpecName: "config-data") pod "ee227024-6845-4f5f-aac4-a9801fb72cdf" (UID: "ee227024-6845-4f5f-aac4-a9801fb72cdf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.079837 4784 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.079898 4784 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.079915 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf6h6\" (UniqueName: \"kubernetes.io/projected/ee227024-6845-4f5f-aac4-a9801fb72cdf-kube-api-access-jf6h6\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.079929 4784 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee227024-6845-4f5f-aac4-a9801fb72cdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.536535 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485921-qnsx6" event={"ID":"ee227024-6845-4f5f-aac4-a9801fb72cdf","Type":"ContainerDied","Data":"041a985343da1e00727a3ee023aea434bc28755f2fe35263194c3897f7ce08d8"} Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.536571 4784 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="041a985343da1e00727a3ee023aea434bc28755f2fe35263194c3897f7ce08d8" Jan 23 08:01:08 crc kubenswrapper[4784]: I0123 08:01:08.536642 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485921-qnsx6" Jan 23 08:01:14 crc kubenswrapper[4784]: I0123 08:01:14.253932 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 08:01:14 crc kubenswrapper[4784]: E0123 08:01:14.254813 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:01:16 crc kubenswrapper[4784]: I0123 08:01:16.132392 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xcvs9_19b07fe7-1025-43fc-a462-4aaef0fe9833/prometheus-operator/0.log" Jan 23 08:01:16 crc kubenswrapper[4784]: I0123 08:01:16.335698 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc_6031f0e9-6391-4a28-8473-a458ec564ad6/prometheus-operator-admission-webhook/0.log" Jan 23 08:01:16 crc kubenswrapper[4784]: I0123 08:01:16.377267 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf_7cb0c63a-cabc-45c2-84b7-54ae314e802d/prometheus-operator-admission-webhook/0.log" Jan 23 08:01:16 crc kubenswrapper[4784]: I0123 08:01:16.620817 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-2gwnx_1e753c42-8648-4fb6-afeb-6cb5218b1e37/operator/0.log" Jan 23 08:01:16 crc kubenswrapper[4784]: I0123 08:01:16.650472 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k9d2w_d5f9d7a6-d264-4964-8476-a72023915b07/perses-operator/0.log" Jan 23 08:01:27 crc kubenswrapper[4784]: I0123 08:01:27.262920 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 08:01:27 crc kubenswrapper[4784]: E0123 08:01:27.263867 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.452958 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vhtsq"] Jan 23 08:01:28 crc kubenswrapper[4784]: E0123 08:01:28.453658 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee227024-6845-4f5f-aac4-a9801fb72cdf" containerName="keystone-cron" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.453674 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee227024-6845-4f5f-aac4-a9801fb72cdf" containerName="keystone-cron" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.461983 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee227024-6845-4f5f-aac4-a9801fb72cdf" containerName="keystone-cron" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.463814 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.468675 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhtsq"] Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.612489 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-utilities\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.612564 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzltw\" (UniqueName: \"kubernetes.io/projected/dc1a539d-9729-4045-b15e-750c41e6cd0a-kube-api-access-qzltw\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.612797 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-catalog-content\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.714569 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-catalog-content\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.714719 4784 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-utilities\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.714823 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzltw\" (UniqueName: \"kubernetes.io/projected/dc1a539d-9729-4045-b15e-750c41e6cd0a-kube-api-access-qzltw\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.715307 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-catalog-content\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.715340 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-utilities\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.748543 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzltw\" (UniqueName: \"kubernetes.io/projected/dc1a539d-9729-4045-b15e-750c41e6cd0a-kube-api-access-qzltw\") pod \"certified-operators-vhtsq\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:28 crc kubenswrapper[4784]: I0123 08:01:28.792363 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:29 crc kubenswrapper[4784]: I0123 08:01:29.397377 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhtsq"] Jan 23 08:01:29 crc kubenswrapper[4784]: I0123 08:01:29.788457 4784 generic.go:334] "Generic (PLEG): container finished" podID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerID="cbe9c9e22cf05b2583caa598b3be52ee039b9307cd7bb872ec686dd7f7545533" exitCode=0 Jan 23 08:01:29 crc kubenswrapper[4784]: I0123 08:01:29.788501 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhtsq" event={"ID":"dc1a539d-9729-4045-b15e-750c41e6cd0a","Type":"ContainerDied","Data":"cbe9c9e22cf05b2583caa598b3be52ee039b9307cd7bb872ec686dd7f7545533"} Jan 23 08:01:29 crc kubenswrapper[4784]: I0123 08:01:29.788526 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhtsq" event={"ID":"dc1a539d-9729-4045-b15e-750c41e6cd0a","Type":"ContainerStarted","Data":"bd18effa72ce176650d4c5d7ee6ce3f9c7a212136750719cae59a37ea4350812"} Jan 23 08:01:31 crc kubenswrapper[4784]: I0123 08:01:31.674609 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5hnm9_0579fb88-f47a-4ef8-bd01-2dcb5aae28ac/kube-rbac-proxy/0.log" Jan 23 08:01:31 crc kubenswrapper[4784]: I0123 08:01:31.679216 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qn8wr_2840186f-b624-458b-ba7b-988df9ebf049/frr-k8s-webhook-server/0.log" Jan 23 08:01:31 crc kubenswrapper[4784]: I0123 08:01:31.837839 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-frr-files/0.log" Jan 23 08:01:31 crc kubenswrapper[4784]: I0123 08:01:31.984083 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-5hnm9_0579fb88-f47a-4ef8-bd01-2dcb5aae28ac/controller/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.061317 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-frr-files/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.079702 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-reloader/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.105159 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-metrics/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.178179 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-reloader/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.362538 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-reloader/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.398445 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-metrics/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.415276 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-frr-files/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.439963 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-metrics/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.581391 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-frr-files/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.625192 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-reloader/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.628305 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/cp-metrics/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.636479 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/controller/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.820673 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/frr-metrics/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.834574 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/kube-rbac-proxy-frr/0.log" Jan 23 08:01:32 crc kubenswrapper[4784]: I0123 08:01:32.878112 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/kube-rbac-proxy/0.log" Jan 23 08:01:33 crc kubenswrapper[4784]: I0123 08:01:33.000128 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/reloader/0.log" Jan 23 08:01:33 crc kubenswrapper[4784]: I0123 08:01:33.097362 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8589677cff-dzl65_47ec951f-c0f2-40f8-9361-6ca608819c25/manager/1.log" Jan 23 08:01:33 crc kubenswrapper[4784]: I0123 08:01:33.232422 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8589677cff-dzl65_47ec951f-c0f2-40f8-9361-6ca608819c25/manager/0.log" Jan 23 08:01:33 crc kubenswrapper[4784]: I0123 08:01:33.319437 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59c99db6cd-6k4nj_5207d75f-f4c3-4c7d-861b-5f30efec8c5f/webhook-server/0.log" Jan 23 08:01:33 crc kubenswrapper[4784]: I0123 08:01:33.478252 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5j8cg_cb4d7810-332e-403f-96e6-827f7b0881e2/kube-rbac-proxy/0.log" Jan 23 08:01:34 crc kubenswrapper[4784]: I0123 08:01:34.015482 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5j8cg_cb4d7810-332e-403f-96e6-827f7b0881e2/speaker/0.log" Jan 23 08:01:34 crc kubenswrapper[4784]: I0123 08:01:34.724098 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wlldd_d0d8decf-1b4d-447f-9a00-301cb0c4b716/frr/0.log" Jan 23 08:01:34 crc kubenswrapper[4784]: I0123 08:01:34.836369 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhtsq" event={"ID":"dc1a539d-9729-4045-b15e-750c41e6cd0a","Type":"ContainerStarted","Data":"d44c9541cd764c79099e3c47c55104983e23c928280043d9f0565d77280eff33"} Jan 23 08:01:35 crc kubenswrapper[4784]: I0123 08:01:35.847139 4784 generic.go:334] "Generic (PLEG): container finished" podID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerID="d44c9541cd764c79099e3c47c55104983e23c928280043d9f0565d77280eff33" exitCode=0 Jan 23 08:01:35 crc kubenswrapper[4784]: I0123 08:01:35.847215 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhtsq" event={"ID":"dc1a539d-9729-4045-b15e-750c41e6cd0a","Type":"ContainerDied","Data":"d44c9541cd764c79099e3c47c55104983e23c928280043d9f0565d77280eff33"} Jan 23 08:01:41 crc kubenswrapper[4784]: I0123 08:01:41.929622 4784 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhtsq" event={"ID":"dc1a539d-9729-4045-b15e-750c41e6cd0a","Type":"ContainerStarted","Data":"5ddc4ae93ccc5a97c683b20b021d48c7a868da5e1a9bdbb98878285023bff41a"} Jan 23 08:01:41 crc kubenswrapper[4784]: I0123 08:01:41.961045 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vhtsq" podStartSLOduration=2.688289749 podStartE2EDuration="13.961021446s" podCreationTimestamp="2026-01-23 08:01:28 +0000 UTC" firstStartedPulling="2026-01-23 08:01:29.790653857 +0000 UTC m=+6093.023161851" lastFinishedPulling="2026-01-23 08:01:41.063385574 +0000 UTC m=+6104.295893548" observedRunningTime="2026-01-23 08:01:41.95769354 +0000 UTC m=+6105.190201534" watchObservedRunningTime="2026-01-23 08:01:41.961021446 +0000 UTC m=+6105.193529420" Jan 23 08:01:42 crc kubenswrapper[4784]: I0123 08:01:42.271521 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 08:01:42 crc kubenswrapper[4784]: E0123 08:01:42.272244 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.376288 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj_b0d55601-449e-4c1e-a99c-bbe643195ad1/util/0.log" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.502195 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj_b0d55601-449e-4c1e-a99c-bbe643195ad1/pull/0.log" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.516534 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj_b0d55601-449e-4c1e-a99c-bbe643195ad1/util/0.log" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.599583 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj_b0d55601-449e-4c1e-a99c-bbe643195ad1/pull/0.log" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.746547 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj_b0d55601-449e-4c1e-a99c-bbe643195ad1/util/0.log" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.764468 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj_b0d55601-449e-4c1e-a99c-bbe643195ad1/extract/0.log" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.764511 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8p8wj_b0d55601-449e-4c1e-a99c-bbe643195ad1/pull/0.log" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.793492 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.793550 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.843013 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:48 crc kubenswrapper[4784]: I0123 08:01:48.934099 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc_eac5d9d9-1017-4063-b4a9-18b05eece465/util/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.059825 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.119334 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc_eac5d9d9-1017-4063-b4a9-18b05eece465/util/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.122623 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhtsq"] Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.155036 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc_eac5d9d9-1017-4063-b4a9-18b05eece465/pull/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.155133 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc_eac5d9d9-1017-4063-b4a9-18b05eece465/pull/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.300285 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc_eac5d9d9-1017-4063-b4a9-18b05eece465/util/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.313241 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc_eac5d9d9-1017-4063-b4a9-18b05eece465/pull/0.log" Jan 23 
08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.374938 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vkwdc_eac5d9d9-1017-4063-b4a9-18b05eece465/extract/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.493209 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp_0bfddef6-60e2-416e-b320-20567c696fc4/util/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.639806 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp_0bfddef6-60e2-416e-b320-20567c696fc4/util/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.648412 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp_0bfddef6-60e2-416e-b320-20567c696fc4/pull/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.678257 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp_0bfddef6-60e2-416e-b320-20567c696fc4/pull/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.842084 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp_0bfddef6-60e2-416e-b320-20567c696fc4/pull/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.880748 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp_0bfddef6-60e2-416e-b320-20567c696fc4/util/0.log" Jan 23 08:01:49 crc kubenswrapper[4784]: I0123 08:01:49.880897 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082gtlp_0bfddef6-60e2-416e-b320-20567c696fc4/extract/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.018118 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bpw4g_d0d8177c-add0-4980-ae22-44a0ede0a599/extract-utilities/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.163730 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bpw4g_d0d8177c-add0-4980-ae22-44a0ede0a599/extract-utilities/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.165565 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bpw4g_d0d8177c-add0-4980-ae22-44a0ede0a599/extract-content/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.227572 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bpw4g_d0d8177c-add0-4980-ae22-44a0ede0a599/extract-content/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.345118 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bpw4g_d0d8177c-add0-4980-ae22-44a0ede0a599/extract-utilities/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.347131 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bpw4g_d0d8177c-add0-4980-ae22-44a0ede0a599/extract-content/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.552509 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhtsq_dc1a539d-9729-4045-b15e-750c41e6cd0a/extract-utilities/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.769279 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-vhtsq_dc1a539d-9729-4045-b15e-750c41e6cd0a/extract-content/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.811078 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhtsq_dc1a539d-9729-4045-b15e-750c41e6cd0a/extract-content/0.log" Jan 23 08:01:50 crc kubenswrapper[4784]: I0123 08:01:50.825665 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhtsq_dc1a539d-9729-4045-b15e-750c41e6cd0a/extract-utilities/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.015515 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vhtsq" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerName="registry-server" containerID="cri-o://5ddc4ae93ccc5a97c683b20b021d48c7a868da5e1a9bdbb98878285023bff41a" gracePeriod=2 Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.068207 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhtsq_dc1a539d-9729-4045-b15e-750c41e6cd0a/extract-utilities/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.104389 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhtsq_dc1a539d-9729-4045-b15e-750c41e6cd0a/extract-content/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.258158 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bpw4g_d0d8177c-add0-4980-ae22-44a0ede0a599/registry-server/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.355011 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-26lkr_a98abc92-b990-4885-af1c-221be1db3652/extract-utilities/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.498552 4784 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-26lkr_a98abc92-b990-4885-af1c-221be1db3652/extract-content/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.525250 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-26lkr_a98abc92-b990-4885-af1c-221be1db3652/extract-content/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.525655 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-26lkr_a98abc92-b990-4885-af1c-221be1db3652/extract-utilities/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.700319 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-26lkr_a98abc92-b990-4885-af1c-221be1db3652/extract-utilities/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.724478 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-26lkr_a98abc92-b990-4885-af1c-221be1db3652/extract-content/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.786776 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhtsq_dc1a539d-9729-4045-b15e-750c41e6cd0a/registry-server/0.log" Jan 23 08:01:51 crc kubenswrapper[4784]: I0123 08:01:51.996309 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-srwlm_4180fe07-d016-4462-8f55-9da994cc6827/marketplace-operator/0.log" Jan 23 08:01:52 crc kubenswrapper[4784]: I0123 08:01:52.027227 4784 generic.go:334] "Generic (PLEG): container finished" podID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerID="5ddc4ae93ccc5a97c683b20b021d48c7a868da5e1a9bdbb98878285023bff41a" exitCode=0 Jan 23 08:01:52 crc kubenswrapper[4784]: I0123 08:01:52.027278 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vhtsq" event={"ID":"dc1a539d-9729-4045-b15e-750c41e6cd0a","Type":"ContainerDied","Data":"5ddc4ae93ccc5a97c683b20b021d48c7a868da5e1a9bdbb98878285023bff41a"} Jan 23 08:01:52 crc kubenswrapper[4784]: I0123 08:01:52.199847 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtrsz_e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1/extract-utilities/0.log" Jan 23 08:01:52 crc kubenswrapper[4784]: I0123 08:01:52.426948 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtrsz_e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1/extract-content/0.log" Jan 23 08:01:52 crc kubenswrapper[4784]: I0123 08:01:52.426993 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtrsz_e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1/extract-utilities/0.log" Jan 23 08:01:52 crc kubenswrapper[4784]: I0123 08:01:52.657014 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtrsz_e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1/extract-content/0.log" Jan 23 08:01:52 crc kubenswrapper[4784]: I0123 08:01:52.791259 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtrsz_e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1/extract-utilities/0.log" Jan 23 08:01:52 crc kubenswrapper[4784]: I0123 08:01:52.791582 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtrsz_e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1/extract-content/0.log" Jan 23 08:01:53 crc kubenswrapper[4784]: I0123 08:01:53.035116 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fws6t_9ec609ce-97c4-4d5c-9621-2845609c71f1/extract-utilities/0.log" Jan 23 08:01:53 crc kubenswrapper[4784]: I0123 08:01:53.093475 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-dtrsz_e58dcf77-8d95-4d5a-8fb3-ed8a463a18b1/registry-server/0.log" Jan 23 08:01:53 crc kubenswrapper[4784]: I0123 08:01:53.285869 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fws6t_9ec609ce-97c4-4d5c-9621-2845609c71f1/extract-utilities/0.log" Jan 23 08:01:53 crc kubenswrapper[4784]: I0123 08:01:53.302833 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fws6t_9ec609ce-97c4-4d5c-9621-2845609c71f1/extract-content/0.log" Jan 23 08:01:53 crc kubenswrapper[4784]: I0123 08:01:53.311409 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fws6t_9ec609ce-97c4-4d5c-9621-2845609c71f1/extract-content/0.log" Jan 23 08:01:53 crc kubenswrapper[4784]: I0123 08:01:53.491870 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fws6t_9ec609ce-97c4-4d5c-9621-2845609c71f1/extract-content/0.log" Jan 23 08:01:53 crc kubenswrapper[4784]: I0123 08:01:53.496203 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fws6t_9ec609ce-97c4-4d5c-9621-2845609c71f1/extract-utilities/0.log" Jan 23 08:01:53 crc kubenswrapper[4784]: I0123 08:01:53.936728 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-26lkr_a98abc92-b990-4885-af1c-221be1db3652/registry-server/0.log" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.047463 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhtsq" event={"ID":"dc1a539d-9729-4045-b15e-750c41e6cd0a","Type":"ContainerDied","Data":"bd18effa72ce176650d4c5d7ee6ce3f9c7a212136750719cae59a37ea4350812"} Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.047557 4784 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bd18effa72ce176650d4c5d7ee6ce3f9c7a212136750719cae59a37ea4350812" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.082422 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.245739 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzltw\" (UniqueName: \"kubernetes.io/projected/dc1a539d-9729-4045-b15e-750c41e6cd0a-kube-api-access-qzltw\") pod \"dc1a539d-9729-4045-b15e-750c41e6cd0a\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.245868 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-catalog-content\") pod \"dc1a539d-9729-4045-b15e-750c41e6cd0a\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.245917 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-utilities\") pod \"dc1a539d-9729-4045-b15e-750c41e6cd0a\" (UID: \"dc1a539d-9729-4045-b15e-750c41e6cd0a\") " Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.246784 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-utilities" (OuterVolumeSpecName: "utilities") pod "dc1a539d-9729-4045-b15e-750c41e6cd0a" (UID: "dc1a539d-9729-4045-b15e-750c41e6cd0a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.270766 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1a539d-9729-4045-b15e-750c41e6cd0a-kube-api-access-qzltw" (OuterVolumeSpecName: "kube-api-access-qzltw") pod "dc1a539d-9729-4045-b15e-750c41e6cd0a" (UID: "dc1a539d-9729-4045-b15e-750c41e6cd0a"). InnerVolumeSpecName "kube-api-access-qzltw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.306371 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc1a539d-9729-4045-b15e-750c41e6cd0a" (UID: "dc1a539d-9729-4045-b15e-750c41e6cd0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.349149 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzltw\" (UniqueName: \"kubernetes.io/projected/dc1a539d-9729-4045-b15e-750c41e6cd0a-kube-api-access-qzltw\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.349181 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.349195 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1a539d-9729-4045-b15e-750c41e6cd0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:54 crc kubenswrapper[4784]: I0123 08:01:54.561488 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-fws6t_9ec609ce-97c4-4d5c-9621-2845609c71f1/registry-server/0.log" Jan 23 08:01:55 crc kubenswrapper[4784]: I0123 08:01:55.056041 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhtsq" Jan 23 08:01:55 crc kubenswrapper[4784]: I0123 08:01:55.109018 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhtsq"] Jan 23 08:01:55 crc kubenswrapper[4784]: I0123 08:01:55.123146 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vhtsq"] Jan 23 08:01:55 crc kubenswrapper[4784]: I0123 08:01:55.266807 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" path="/var/lib/kubelet/pods/dc1a539d-9729-4045-b15e-750c41e6cd0a/volumes" Jan 23 08:01:56 crc kubenswrapper[4784]: I0123 08:01:56.254291 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 08:01:57 crc kubenswrapper[4784]: I0123 08:01:57.077892 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"19505267ea5b25506f7f335acc557c58b8cb54b996bf67195f3ea8781b5d6cdf"} Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.601092 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-whhn4"] Jan 23 08:02:06 crc kubenswrapper[4784]: E0123 08:02:06.602410 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerName="registry-server" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.602428 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" 
containerName="registry-server" Jan 23 08:02:06 crc kubenswrapper[4784]: E0123 08:02:06.602458 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerName="extract-content" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.602467 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerName="extract-content" Jan 23 08:02:06 crc kubenswrapper[4784]: E0123 08:02:06.602503 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerName="extract-utilities" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.602515 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerName="extract-utilities" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.602809 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1a539d-9729-4045-b15e-750c41e6cd0a" containerName="registry-server" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.604786 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.609200 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-utilities\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.609627 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-catalog-content\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.609812 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlxb\" (UniqueName: \"kubernetes.io/projected/b2133e73-c43c-452b-87e2-73734755ca87-kube-api-access-6zlxb\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.626973 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whhn4"] Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.711907 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-catalog-content\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.711947 4784 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-6zlxb\" (UniqueName: \"kubernetes.io/projected/b2133e73-c43c-452b-87e2-73734755ca87-kube-api-access-6zlxb\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.712097 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-utilities\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.712773 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-utilities\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.713073 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-catalog-content\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.754181 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zlxb\" (UniqueName: \"kubernetes.io/projected/b2133e73-c43c-452b-87e2-73734755ca87-kube-api-access-6zlxb\") pod \"redhat-operators-whhn4\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:06 crc kubenswrapper[4784]: I0123 08:02:06.959660 4784 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:07 crc kubenswrapper[4784]: I0123 08:02:07.499851 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whhn4"] Jan 23 08:02:08 crc kubenswrapper[4784]: I0123 08:02:08.189487 4784 generic.go:334] "Generic (PLEG): container finished" podID="b2133e73-c43c-452b-87e2-73734755ca87" containerID="dcf73e17022eb8a019c6c9afd5ebd2a83a877c99ffcd924aebb46631d567a459" exitCode=0 Jan 23 08:02:08 crc kubenswrapper[4784]: I0123 08:02:08.189541 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whhn4" event={"ID":"b2133e73-c43c-452b-87e2-73734755ca87","Type":"ContainerDied","Data":"dcf73e17022eb8a019c6c9afd5ebd2a83a877c99ffcd924aebb46631d567a459"} Jan 23 08:02:08 crc kubenswrapper[4784]: I0123 08:02:08.189719 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whhn4" event={"ID":"b2133e73-c43c-452b-87e2-73734755ca87","Type":"ContainerStarted","Data":"6afc79c51168ef45d0f21e6d7c645f6787dd9e4daf52d10ecf7d106474b7dfc2"} Jan 23 08:02:08 crc kubenswrapper[4784]: I0123 08:02:08.191736 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:02:08 crc kubenswrapper[4784]: I0123 08:02:08.990505 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-789dd76d8c-pfkcc_6031f0e9-6391-4a28-8473-a458ec564ad6/prometheus-operator-admission-webhook/0.log" Jan 23 08:02:08 crc kubenswrapper[4784]: I0123 08:02:08.990928 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xcvs9_19b07fe7-1025-43fc-a462-4aaef0fe9833/prometheus-operator/0.log" Jan 23 08:02:09 crc kubenswrapper[4784]: I0123 08:02:09.223613 4784 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-789dd76d8c-t8hjf_7cb0c63a-cabc-45c2-84b7-54ae314e802d/prometheus-operator-admission-webhook/0.log" Jan 23 08:02:09 crc kubenswrapper[4784]: I0123 08:02:09.381350 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-2gwnx_1e753c42-8648-4fb6-afeb-6cb5218b1e37/operator/0.log" Jan 23 08:02:09 crc kubenswrapper[4784]: I0123 08:02:09.400478 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-k9d2w_d5f9d7a6-d264-4964-8476-a72023915b07/perses-operator/0.log" Jan 23 08:02:12 crc kubenswrapper[4784]: I0123 08:02:12.235834 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whhn4" event={"ID":"b2133e73-c43c-452b-87e2-73734755ca87","Type":"ContainerStarted","Data":"e2567ee11ad167f5cb28ca76ae1635d607dc387d423d52c3aeeb06e1c2dc2be1"} Jan 23 08:02:14 crc kubenswrapper[4784]: I0123 08:02:14.256560 4784 generic.go:334] "Generic (PLEG): container finished" podID="b2133e73-c43c-452b-87e2-73734755ca87" containerID="e2567ee11ad167f5cb28ca76ae1635d607dc387d423d52c3aeeb06e1c2dc2be1" exitCode=0 Jan 23 08:02:14 crc kubenswrapper[4784]: I0123 08:02:14.256645 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whhn4" event={"ID":"b2133e73-c43c-452b-87e2-73734755ca87","Type":"ContainerDied","Data":"e2567ee11ad167f5cb28ca76ae1635d607dc387d423d52c3aeeb06e1c2dc2be1"} Jan 23 08:02:15 crc kubenswrapper[4784]: I0123 08:02:15.268627 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whhn4" event={"ID":"b2133e73-c43c-452b-87e2-73734755ca87","Type":"ContainerStarted","Data":"17b8a7b80100af755a226fba343faf4abf115f8a7fd4c12ccfcea37a5918aabd"} Jan 23 08:02:16 crc kubenswrapper[4784]: I0123 08:02:16.297318 4784 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-whhn4" podStartSLOduration=3.456361044 podStartE2EDuration="10.297297581s" podCreationTimestamp="2026-01-23 08:02:06 +0000 UTC" firstStartedPulling="2026-01-23 08:02:08.191442395 +0000 UTC m=+6131.423950369" lastFinishedPulling="2026-01-23 08:02:15.032378932 +0000 UTC m=+6138.264886906" observedRunningTime="2026-01-23 08:02:16.293158106 +0000 UTC m=+6139.525666080" watchObservedRunningTime="2026-01-23 08:02:16.297297581 +0000 UTC m=+6139.529805555" Jan 23 08:02:16 crc kubenswrapper[4784]: I0123 08:02:16.960077 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:16 crc kubenswrapper[4784]: I0123 08:02:16.960385 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:18 crc kubenswrapper[4784]: I0123 08:02:18.039540 4784 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-whhn4" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="registry-server" probeResult="failure" output=< Jan 23 08:02:18 crc kubenswrapper[4784]: timeout: failed to connect service ":50051" within 1s Jan 23 08:02:18 crc kubenswrapper[4784]: > Jan 23 08:02:27 crc kubenswrapper[4784]: I0123 08:02:27.040291 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:27 crc kubenswrapper[4784]: I0123 08:02:27.125631 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:27 crc kubenswrapper[4784]: I0123 08:02:27.292239 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whhn4"] Jan 23 08:02:28 crc kubenswrapper[4784]: I0123 08:02:28.407150 4784 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-whhn4" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="registry-server" containerID="cri-o://17b8a7b80100af755a226fba343faf4abf115f8a7fd4c12ccfcea37a5918aabd" gracePeriod=2 Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.418342 4784 generic.go:334] "Generic (PLEG): container finished" podID="b2133e73-c43c-452b-87e2-73734755ca87" containerID="17b8a7b80100af755a226fba343faf4abf115f8a7fd4c12ccfcea37a5918aabd" exitCode=0 Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.418417 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whhn4" event={"ID":"b2133e73-c43c-452b-87e2-73734755ca87","Type":"ContainerDied","Data":"17b8a7b80100af755a226fba343faf4abf115f8a7fd4c12ccfcea37a5918aabd"} Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.657405 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.789050 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-utilities\") pod \"b2133e73-c43c-452b-87e2-73734755ca87\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.789200 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-catalog-content\") pod \"b2133e73-c43c-452b-87e2-73734755ca87\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.789450 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zlxb\" (UniqueName: \"kubernetes.io/projected/b2133e73-c43c-452b-87e2-73734755ca87-kube-api-access-6zlxb\") pod 
\"b2133e73-c43c-452b-87e2-73734755ca87\" (UID: \"b2133e73-c43c-452b-87e2-73734755ca87\") " Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.793224 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-utilities" (OuterVolumeSpecName: "utilities") pod "b2133e73-c43c-452b-87e2-73734755ca87" (UID: "b2133e73-c43c-452b-87e2-73734755ca87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.824739 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2133e73-c43c-452b-87e2-73734755ca87-kube-api-access-6zlxb" (OuterVolumeSpecName: "kube-api-access-6zlxb") pod "b2133e73-c43c-452b-87e2-73734755ca87" (UID: "b2133e73-c43c-452b-87e2-73734755ca87"). InnerVolumeSpecName "kube-api-access-6zlxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.893294 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zlxb\" (UniqueName: \"kubernetes.io/projected/b2133e73-c43c-452b-87e2-73734755ca87-kube-api-access-6zlxb\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.893541 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.944806 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2133e73-c43c-452b-87e2-73734755ca87" (UID: "b2133e73-c43c-452b-87e2-73734755ca87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:02:29 crc kubenswrapper[4784]: I0123 08:02:29.996099 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2133e73-c43c-452b-87e2-73734755ca87-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:30 crc kubenswrapper[4784]: I0123 08:02:30.429817 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whhn4" event={"ID":"b2133e73-c43c-452b-87e2-73734755ca87","Type":"ContainerDied","Data":"6afc79c51168ef45d0f21e6d7c645f6787dd9e4daf52d10ecf7d106474b7dfc2"} Jan 23 08:02:30 crc kubenswrapper[4784]: I0123 08:02:30.429875 4784 scope.go:117] "RemoveContainer" containerID="17b8a7b80100af755a226fba343faf4abf115f8a7fd4c12ccfcea37a5918aabd" Jan 23 08:02:30 crc kubenswrapper[4784]: I0123 08:02:30.430043 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whhn4" Jan 23 08:02:30 crc kubenswrapper[4784]: I0123 08:02:30.467660 4784 scope.go:117] "RemoveContainer" containerID="e2567ee11ad167f5cb28ca76ae1635d607dc387d423d52c3aeeb06e1c2dc2be1" Jan 23 08:02:30 crc kubenswrapper[4784]: I0123 08:02:30.486232 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whhn4"] Jan 23 08:02:30 crc kubenswrapper[4784]: I0123 08:02:30.498558 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-whhn4"] Jan 23 08:02:30 crc kubenswrapper[4784]: I0123 08:02:30.510021 4784 scope.go:117] "RemoveContainer" containerID="dcf73e17022eb8a019c6c9afd5ebd2a83a877c99ffcd924aebb46631d567a459" Jan 23 08:02:31 crc kubenswrapper[4784]: I0123 08:02:31.270450 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2133e73-c43c-452b-87e2-73734755ca87" path="/var/lib/kubelet/pods/b2133e73-c43c-452b-87e2-73734755ca87/volumes" Jan 23 08:04:06 crc 
kubenswrapper[4784]: I0123 08:04:06.680780 4784 generic.go:334] "Generic (PLEG): container finished" podID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerID="bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02" exitCode=0 Jan 23 08:04:06 crc kubenswrapper[4784]: I0123 08:04:06.680851 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nthhr/must-gather-9jtnb" event={"ID":"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3","Type":"ContainerDied","Data":"bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02"} Jan 23 08:04:06 crc kubenswrapper[4784]: I0123 08:04:06.681996 4784 scope.go:117] "RemoveContainer" containerID="bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02" Jan 23 08:04:07 crc kubenswrapper[4784]: I0123 08:04:07.726699 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nthhr_must-gather-9jtnb_21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3/gather/0.log" Jan 23 08:04:16 crc kubenswrapper[4784]: I0123 08:04:16.952212 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nthhr/must-gather-9jtnb"] Jan 23 08:04:16 crc kubenswrapper[4784]: I0123 08:04:16.952944 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-nthhr/must-gather-9jtnb" podUID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerName="copy" containerID="cri-o://deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a" gracePeriod=2 Jan 23 08:04:16 crc kubenswrapper[4784]: I0123 08:04:16.966645 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nthhr/must-gather-9jtnb"] Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.522125 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nthhr_must-gather-9jtnb_21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3/copy/0.log" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.523149 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.584500 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxll6\" (UniqueName: \"kubernetes.io/projected/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-kube-api-access-bxll6\") pod \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\" (UID: \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\") " Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.584960 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-must-gather-output\") pod \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\" (UID: \"21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3\") " Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.604465 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-kube-api-access-bxll6" (OuterVolumeSpecName: "kube-api-access-bxll6") pod "21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" (UID: "21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3"). InnerVolumeSpecName "kube-api-access-bxll6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.687264 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxll6\" (UniqueName: \"kubernetes.io/projected/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-kube-api-access-bxll6\") on node \"crc\" DevicePath \"\"" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.781964 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" (UID: "21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.789067 4784 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.804994 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nthhr_must-gather-9jtnb_21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3/copy/0.log" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.805362 4784 generic.go:334] "Generic (PLEG): container finished" podID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerID="deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a" exitCode=143 Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.805418 4784 scope.go:117] "RemoveContainer" containerID="deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.805418 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nthhr/must-gather-9jtnb" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.825293 4784 scope.go:117] "RemoveContainer" containerID="bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.920235 4784 scope.go:117] "RemoveContainer" containerID="deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a" Jan 23 08:04:17 crc kubenswrapper[4784]: E0123 08:04:17.920718 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a\": container with ID starting with deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a not found: ID does not exist" containerID="deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.920830 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a"} err="failed to get container status \"deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a\": rpc error: code = NotFound desc = could not find container \"deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a\": container with ID starting with deb4e970495a51618120a86314fdde209b0ecab509452cc692d28e9c78f0f66a not found: ID does not exist" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.920981 4784 scope.go:117] "RemoveContainer" containerID="bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02" Jan 23 08:04:17 crc kubenswrapper[4784]: E0123 08:04:17.921301 4784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02\": container with ID starting with 
bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02 not found: ID does not exist" containerID="bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02" Jan 23 08:04:17 crc kubenswrapper[4784]: I0123 08:04:17.921331 4784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02"} err="failed to get container status \"bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02\": rpc error: code = NotFound desc = could not find container \"bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02\": container with ID starting with bef4bc88dd5e13095397e005a8a2c4b203eab1cfb56107d133653d463066bd02 not found: ID does not exist" Jan 23 08:04:19 crc kubenswrapper[4784]: I0123 08:04:19.265682 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" path="/var/lib/kubelet/pods/21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3/volumes" Jan 23 08:04:23 crc kubenswrapper[4784]: I0123 08:04:23.603158 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:04:23 crc kubenswrapper[4784]: I0123 08:04:23.603703 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:04:43 crc kubenswrapper[4784]: I0123 08:04:43.170142 4784 scope.go:117] "RemoveContainer" containerID="7912bc6e15cab812b8efee1a8962bcc6421ee0851bf394329730fc4080dec49c" Jan 23 08:04:53 crc kubenswrapper[4784]: I0123 
08:04:53.603806 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:04:53 crc kubenswrapper[4784]: I0123 08:04:53.604580 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:05:23 crc kubenswrapper[4784]: I0123 08:05:23.603868 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:05:23 crc kubenswrapper[4784]: I0123 08:05:23.604717 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:05:23 crc kubenswrapper[4784]: I0123 08:05:23.604841 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 08:05:23 crc kubenswrapper[4784]: I0123 08:05:23.606217 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19505267ea5b25506f7f335acc557c58b8cb54b996bf67195f3ea8781b5d6cdf"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:05:23 crc kubenswrapper[4784]: I0123 08:05:23.606358 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://19505267ea5b25506f7f335acc557c58b8cb54b996bf67195f3ea8781b5d6cdf" gracePeriod=600 Jan 23 08:05:24 crc kubenswrapper[4784]: I0123 08:05:24.582100 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="19505267ea5b25506f7f335acc557c58b8cb54b996bf67195f3ea8781b5d6cdf" exitCode=0 Jan 23 08:05:24 crc kubenswrapper[4784]: I0123 08:05:24.582181 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"19505267ea5b25506f7f335acc557c58b8cb54b996bf67195f3ea8781b5d6cdf"} Jan 23 08:05:24 crc kubenswrapper[4784]: I0123 08:05:24.582852 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerStarted","Data":"383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70"} Jan 23 08:05:24 crc kubenswrapper[4784]: I0123 08:05:24.582933 4784 scope.go:117] "RemoveContainer" containerID="e6231bb2f4cae0d33ca6b333a08bd93a97025235dc1c871e75e5a8acef7c521f" Jan 23 08:05:43 crc kubenswrapper[4784]: I0123 08:05:43.233908 4784 scope.go:117] "RemoveContainer" containerID="501978905cb42fb0b60588c1f4d83cda43cfd9d1870eb24604329b546e1b638b" Jan 23 08:07:23 crc kubenswrapper[4784]: I0123 08:07:23.603699 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:07:23 crc kubenswrapper[4784]: I0123 08:07:23.604430 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:07:43 crc kubenswrapper[4784]: I0123 08:07:43.330283 4784 scope.go:117] "RemoveContainer" containerID="5ddc4ae93ccc5a97c683b20b021d48c7a868da5e1a9bdbb98878285023bff41a" Jan 23 08:07:43 crc kubenswrapper[4784]: I0123 08:07:43.364305 4784 scope.go:117] "RemoveContainer" containerID="d44c9541cd764c79099e3c47c55104983e23c928280043d9f0565d77280eff33" Jan 23 08:07:43 crc kubenswrapper[4784]: I0123 08:07:43.393809 4784 scope.go:117] "RemoveContainer" containerID="cbe9c9e22cf05b2583caa598b3be52ee039b9307cd7bb872ec686dd7f7545533" Jan 23 08:07:53 crc kubenswrapper[4784]: I0123 08:07:53.602921 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:07:53 crc kubenswrapper[4784]: I0123 08:07:53.603638 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:08:23 crc kubenswrapper[4784]: I0123 08:08:23.603024 4784 patch_prober.go:28] interesting pod/machine-config-daemon-r7dpd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:08:23 crc kubenswrapper[4784]: I0123 08:08:23.603976 4784 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:08:23 crc kubenswrapper[4784]: I0123 08:08:23.604058 4784 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" Jan 23 08:08:23 crc kubenswrapper[4784]: I0123 08:08:23.605291 4784 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70"} pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:08:23 crc kubenswrapper[4784]: I0123 08:08:23.605401 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerName="machine-config-daemon" containerID="cri-o://383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" gracePeriod=600 Jan 23 08:08:24 crc kubenswrapper[4784]: E0123 08:08:24.414768 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:08:24 crc kubenswrapper[4784]: I0123 08:08:24.567908 4784 generic.go:334] "Generic (PLEG): container finished" podID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" exitCode=0 Jan 23 08:08:24 crc kubenswrapper[4784]: I0123 08:08:24.567973 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" event={"ID":"ce19e3ac-f68d-40a1-b01a-740a09dc59e1","Type":"ContainerDied","Data":"383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70"} Jan 23 08:08:24 crc kubenswrapper[4784]: I0123 08:08:24.568404 4784 scope.go:117] "RemoveContainer" containerID="19505267ea5b25506f7f335acc557c58b8cb54b996bf67195f3ea8781b5d6cdf" Jan 23 08:08:24 crc kubenswrapper[4784]: I0123 08:08:24.569327 4784 scope.go:117] "RemoveContainer" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:08:24 crc kubenswrapper[4784]: E0123 08:08:24.569611 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:08:38 crc kubenswrapper[4784]: I0123 08:08:38.254151 4784 scope.go:117] "RemoveContainer" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:08:38 crc kubenswrapper[4784]: E0123 08:08:38.255387 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:08:49 crc kubenswrapper[4784]: I0123 08:08:49.253850 4784 scope.go:117] "RemoveContainer" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:08:49 crc kubenswrapper[4784]: E0123 08:08:49.254522 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:09:00 crc kubenswrapper[4784]: I0123 08:09:00.254160 4784 scope.go:117] "RemoveContainer" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:09:00 crc kubenswrapper[4784]: E0123 08:09:00.255049 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.589289 4784 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tnp2l"] Jan 23 08:09:04 crc kubenswrapper[4784]: E0123 08:09:04.590400 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="registry-server" Jan 23 08:09:04 crc 
kubenswrapper[4784]: I0123 08:09:04.590449 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="registry-server" Jan 23 08:09:04 crc kubenswrapper[4784]: E0123 08:09:04.590472 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="extract-utilities" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.590482 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="extract-utilities" Jan 23 08:09:04 crc kubenswrapper[4784]: E0123 08:09:04.590502 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="extract-content" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.590510 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="extract-content" Jan 23 08:09:04 crc kubenswrapper[4784]: E0123 08:09:04.590542 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerName="copy" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.590551 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerName="copy" Jan 23 08:09:04 crc kubenswrapper[4784]: E0123 08:09:04.590566 4784 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerName="gather" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.590574 4784 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerName="gather" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.590803 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2133e73-c43c-452b-87e2-73734755ca87" containerName="registry-server" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.590831 4784 
memory_manager.go:354] "RemoveStaleState removing state" podUID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerName="gather" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.590855 4784 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c76ae2-b6f6-4c57-87c9-d0d0aa96f3b3" containerName="copy" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.592501 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.595158 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flszr\" (UniqueName: \"kubernetes.io/projected/481dac25-614d-41eb-9061-220a6167f8d2-kube-api-access-flszr\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.595590 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-utilities\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.595652 4784 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-catalog-content\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.608932 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnp2l"] Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.698214 4784 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flszr\" (UniqueName: \"kubernetes.io/projected/481dac25-614d-41eb-9061-220a6167f8d2-kube-api-access-flszr\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.698323 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-utilities\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.698379 4784 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-catalog-content\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.699058 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-catalog-content\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.699501 4784 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-utilities\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.723938 4784 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-flszr\" (UniqueName: \"kubernetes.io/projected/481dac25-614d-41eb-9061-220a6167f8d2-kube-api-access-flszr\") pod \"redhat-marketplace-tnp2l\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:04 crc kubenswrapper[4784]: I0123 08:09:04.932975 4784 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:05 crc kubenswrapper[4784]: I0123 08:09:05.473376 4784 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnp2l"] Jan 23 08:09:06 crc kubenswrapper[4784]: I0123 08:09:06.067933 4784 generic.go:334] "Generic (PLEG): container finished" podID="481dac25-614d-41eb-9061-220a6167f8d2" containerID="fb0d9e2202fd1b646c624fd25f33689f15f946d5e9feebf078731b5858390003" exitCode=0 Jan 23 08:09:06 crc kubenswrapper[4784]: I0123 08:09:06.068051 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnp2l" event={"ID":"481dac25-614d-41eb-9061-220a6167f8d2","Type":"ContainerDied","Data":"fb0d9e2202fd1b646c624fd25f33689f15f946d5e9feebf078731b5858390003"} Jan 23 08:09:06 crc kubenswrapper[4784]: I0123 08:09:06.068358 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnp2l" event={"ID":"481dac25-614d-41eb-9061-220a6167f8d2","Type":"ContainerStarted","Data":"9023d14bc304ae6e353baa3b4ea482d7c065754579867f5358f03a92665eeb14"} Jan 23 08:09:06 crc kubenswrapper[4784]: I0123 08:09:06.070941 4784 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:09:08 crc kubenswrapper[4784]: I0123 08:09:08.093105 4784 generic.go:334] "Generic (PLEG): container finished" podID="481dac25-614d-41eb-9061-220a6167f8d2" containerID="8ef2c3649fdaa6589a78161a40b2f589a092a16110165fb1bc866ee9b82392cd" exitCode=0 Jan 23 08:09:08 crc 
kubenswrapper[4784]: I0123 08:09:08.093173 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnp2l" event={"ID":"481dac25-614d-41eb-9061-220a6167f8d2","Type":"ContainerDied","Data":"8ef2c3649fdaa6589a78161a40b2f589a092a16110165fb1bc866ee9b82392cd"} Jan 23 08:09:09 crc kubenswrapper[4784]: I0123 08:09:09.107140 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnp2l" event={"ID":"481dac25-614d-41eb-9061-220a6167f8d2","Type":"ContainerStarted","Data":"3d5d0688ca0455ffecf1ed07df0a1b2ce1d883169384cae52880fd0ae400d92a"} Jan 23 08:09:09 crc kubenswrapper[4784]: I0123 08:09:09.137231 4784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tnp2l" podStartSLOduration=2.644374487 podStartE2EDuration="5.137210098s" podCreationTimestamp="2026-01-23 08:09:04 +0000 UTC" firstStartedPulling="2026-01-23 08:09:06.070208372 +0000 UTC m=+6549.302716366" lastFinishedPulling="2026-01-23 08:09:08.563044003 +0000 UTC m=+6551.795551977" observedRunningTime="2026-01-23 08:09:09.124403778 +0000 UTC m=+6552.356911762" watchObservedRunningTime="2026-01-23 08:09:09.137210098 +0000 UTC m=+6552.369718072" Jan 23 08:09:14 crc kubenswrapper[4784]: I0123 08:09:14.254146 4784 scope.go:117] "RemoveContainer" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:09:14 crc kubenswrapper[4784]: E0123 08:09:14.256899 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:09:14 crc kubenswrapper[4784]: I0123 08:09:14.933257 4784 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:14 crc kubenswrapper[4784]: I0123 08:09:14.933638 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:15 crc kubenswrapper[4784]: I0123 08:09:15.055789 4784 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:15 crc kubenswrapper[4784]: I0123 08:09:15.275908 4784 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:15 crc kubenswrapper[4784]: I0123 08:09:15.359194 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnp2l"] Jan 23 08:09:17 crc kubenswrapper[4784]: I0123 08:09:17.208290 4784 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tnp2l" podUID="481dac25-614d-41eb-9061-220a6167f8d2" containerName="registry-server" containerID="cri-o://3d5d0688ca0455ffecf1ed07df0a1b2ce1d883169384cae52880fd0ae400d92a" gracePeriod=2 Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.269762 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tnp2l_481dac25-614d-41eb-9061-220a6167f8d2/registry-server/0.log" Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.279063 4784 generic.go:334] "Generic (PLEG): container finished" podID="481dac25-614d-41eb-9061-220a6167f8d2" containerID="3d5d0688ca0455ffecf1ed07df0a1b2ce1d883169384cae52880fd0ae400d92a" exitCode=137 Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.279107 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnp2l" 
event={"ID":"481dac25-614d-41eb-9061-220a6167f8d2","Type":"ContainerDied","Data":"3d5d0688ca0455ffecf1ed07df0a1b2ce1d883169384cae52880fd0ae400d92a"} Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.552773 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tnp2l_481dac25-614d-41eb-9061-220a6167f8d2/registry-server/0.log" Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.553795 4784 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.735955 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flszr\" (UniqueName: \"kubernetes.io/projected/481dac25-614d-41eb-9061-220a6167f8d2-kube-api-access-flszr\") pod \"481dac25-614d-41eb-9061-220a6167f8d2\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.736267 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-catalog-content\") pod \"481dac25-614d-41eb-9061-220a6167f8d2\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.736335 4784 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-utilities\") pod \"481dac25-614d-41eb-9061-220a6167f8d2\" (UID: \"481dac25-614d-41eb-9061-220a6167f8d2\") " Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.737211 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-utilities" (OuterVolumeSpecName: "utilities") pod "481dac25-614d-41eb-9061-220a6167f8d2" (UID: "481dac25-614d-41eb-9061-220a6167f8d2"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.737882 4784 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.743982 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481dac25-614d-41eb-9061-220a6167f8d2-kube-api-access-flszr" (OuterVolumeSpecName: "kube-api-access-flszr") pod "481dac25-614d-41eb-9061-220a6167f8d2" (UID: "481dac25-614d-41eb-9061-220a6167f8d2"). InnerVolumeSpecName "kube-api-access-flszr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.776339 4784 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481dac25-614d-41eb-9061-220a6167f8d2" (UID: "481dac25-614d-41eb-9061-220a6167f8d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.840072 4784 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481dac25-614d-41eb-9061-220a6167f8d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:23 crc kubenswrapper[4784]: I0123 08:09:23.840117 4784 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flszr\" (UniqueName: \"kubernetes.io/projected/481dac25-614d-41eb-9061-220a6167f8d2-kube-api-access-flszr\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:24 crc kubenswrapper[4784]: I0123 08:09:24.290889 4784 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tnp2l_481dac25-614d-41eb-9061-220a6167f8d2/registry-server/0.log" Jan 23 08:09:24 crc kubenswrapper[4784]: I0123 08:09:24.292127 4784 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tnp2l" event={"ID":"481dac25-614d-41eb-9061-220a6167f8d2","Type":"ContainerDied","Data":"9023d14bc304ae6e353baa3b4ea482d7c065754579867f5358f03a92665eeb14"} Jan 23 08:09:24 crc kubenswrapper[4784]: I0123 08:09:24.292193 4784 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tnp2l" Jan 23 08:09:24 crc kubenswrapper[4784]: I0123 08:09:24.292209 4784 scope.go:117] "RemoveContainer" containerID="3d5d0688ca0455ffecf1ed07df0a1b2ce1d883169384cae52880fd0ae400d92a" Jan 23 08:09:24 crc kubenswrapper[4784]: I0123 08:09:24.310848 4784 scope.go:117] "RemoveContainer" containerID="8ef2c3649fdaa6589a78161a40b2f589a092a16110165fb1bc866ee9b82392cd" Jan 23 08:09:24 crc kubenswrapper[4784]: I0123 08:09:24.325488 4784 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnp2l"] Jan 23 08:09:24 crc kubenswrapper[4784]: I0123 08:09:24.336533 4784 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tnp2l"] Jan 23 08:09:24 crc kubenswrapper[4784]: I0123 08:09:24.351904 4784 scope.go:117] "RemoveContainer" containerID="fb0d9e2202fd1b646c624fd25f33689f15f946d5e9feebf078731b5858390003" Jan 23 08:09:25 crc kubenswrapper[4784]: I0123 08:09:25.268871 4784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481dac25-614d-41eb-9061-220a6167f8d2" path="/var/lib/kubelet/pods/481dac25-614d-41eb-9061-220a6167f8d2/volumes" Jan 23 08:09:28 crc kubenswrapper[4784]: I0123 08:09:28.254291 4784 scope.go:117] "RemoveContainer" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:09:28 crc kubenswrapper[4784]: E0123 08:09:28.256571 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:09:41 crc kubenswrapper[4784]: I0123 08:09:41.254655 4784 scope.go:117] "RemoveContainer" 
containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:09:41 crc kubenswrapper[4784]: E0123 08:09:41.256133 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:09:55 crc kubenswrapper[4784]: I0123 08:09:55.254031 4784 scope.go:117] "RemoveContainer" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:09:55 crc kubenswrapper[4784]: E0123 08:09:55.254915 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1" Jan 23 08:10:09 crc kubenswrapper[4784]: I0123 08:10:09.253494 4784 scope.go:117] "RemoveContainer" containerID="383410831a89af123c1795061358a3499af5fea3830dbaeaf6ecaeb090743c70" Jan 23 08:10:09 crc kubenswrapper[4784]: E0123 08:10:09.254443 4784 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r7dpd_openshift-machine-config-operator(ce19e3ac-f68d-40a1-b01a-740a09dc59e1)\"" pod="openshift-machine-config-operator/machine-config-daemon-r7dpd" podUID="ce19e3ac-f68d-40a1-b01a-740a09dc59e1"